Hadoop 2 WordCount test fails, urgent!!!

```
[hadoop@dsp16 hadoop-2.6.0]$ bin/hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input/ /output/
16/02/09 03:26:57 INFO impl.TimelineClientImpl: Timeline service address: http://dsp16:8188/ws/v1/timeline/
16/02/09 03:26:58 INFO input.FileInputFormat: Total input paths to process : 1
16/02/09 03:26:58 INFO mapreduce.JobSubmitter: number of splits:1
16/02/09 03:26:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1454948804760_0002
16/02/09 03:26:59 INFO impl.YarnClientImpl: Submitted application application_1454948804760_0002
16/02/09 03:26:59 INFO mapreduce.Job: The url to track the job: http://dsp16:8088/proxy/application_1454948804760_0002/
16/02/09 03:26:59 INFO mapreduce.Job: Running job: job_1454948804760_0002
```

YARN log:
```
2016-02-08 17:36:54,055 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: dsp18/192.168.0.120:38540. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-02-08 17:36:55,056 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: dsp18/192.168.0.120:38540. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-02-08 17:36:55,057 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error cleaning master
java.net.ConnectException: Call From dsp16/192.168.0.118 to dsp18:38540 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy35.stopContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.stopContainers(ContainerManagementProtocolPBClientImpl.java:110)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.cleanup(AMLauncher.java:139)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:268)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 9 more
2016-02-09 03:26:58,138 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 2
```
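A note on what the trace actually shows: the ResourceManager on dsp16 is failing a plain TCP connect to the NodeManager on dsp18:38540 while trying to stop the AM's containers. yarn.nodemanager.address defaults to an ephemeral port, so 38540 is only valid while that particular NodeManager process is alive, and "Connection refused" during "Error cleaning master" often just means the NodeManager on dsp18 crashed or was restarted (worth checking with jps on dsp18). A minimal probe, assuming the host and port from the log above, that repeats the two steps the IPC client performs:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

/**
 * Connectivity probe for the failure above: resolve the NodeManager host and
 * attempt a raw TCP connect, the same step that fails inside
 * org.apache.hadoop.ipc.Client. Host and port default to the values from the
 * log (dsp18:38540) and are assumptions; pass your own as arguments.
 */
public class NmConnectProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "dsp18";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 38540;

        // Step 1: name resolution. A stale or inconsistent /etc/hosts entry
        // on the ResourceManager host is a common culprit.
        InetAddress addr = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + addr.getHostAddress());

        // Step 2: the TCP connect itself. "Connection refused" here means
        // nothing is listening on that port, e.g. the NodeManager went down
        // or came back on a different ephemeral port.
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(addr, port), 5000);
            System.out.println("TCP connect OK");
        }
    }
}
```

If the probe fails the same way, check the NodeManager process and its log on dsp18 before anything else.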

Other related questions
Error running a Hadoop MapReduce job!!!
When I run a Hadoop MapReduce job on small files it succeeds, but on somewhat larger files it fails with the error below. I have searched a lot online and nothing fixes it; can any expert help?

```
could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1337)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)
```
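For context: "could only be replicated to 0 nodes" with one datanode alive means the namenode looked at that datanode and excluded it for the new block, and the small-files-work/large-files-fail pattern usually points at remaining disk space on the datanode. A sketch that asks HDFS for its capacity numbers, assuming fs.defaultFS is picked up from the configuration on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

/**
 * Prints the cluster-wide capacity the namenode reports. If "remaining" is
 * smaller than one block, allocation of a new block has nowhere to go even
 * though the datanode process is healthy.
 */
public class HdfsCapacityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            FsStatus st = fs.getStatus();
            long blockSize = fs.getDefaultBlockSize(fs.getWorkingDirectory());
            System.out.printf("capacity=%d used=%d remaining=%d blockSize=%d%n",
                    st.getCapacity(), st.getUsed(), st.getRemaining(), blockSize);
            if (st.getRemaining() < blockSize) {
                System.out.println("Less than one block of space remains.");
            }
        }
    }
}
```

The same numbers are available from `hdfs dfsadmin -report`; a nearly full disk and the dfs.datanode.du.reserved setting are both worth ruling out.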
Single-node Hadoop on Windows: wordcount test fails
![screenshot](https://img-ask.csdn.net/upload/201801/17/1516163926_549558.png) The datanode, namenode, nodemanager, and resourcemanager all start normally. After running `hadoop jar hadoop-mapreduce-examples-2.8.3.jar wordcount /input/test.txt /output`, the error shown above appears.
Error when running WordCount in Hadoop?
Running WordCount in Hadoop fails with "Error: java.io.FileNotFoundException: Path is not a file: /tmp/hadoop-yarn". What causes this?
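One common reading of this message: the input path handed to the job resolves to a directory (here the YARN staging tree under /tmp/hadoop-yarn) and FileInputFormat only lists one level deep, so a nested directory later fails to open as a file. A hypothetical driver fragment, with a made-up input path, showing the recursive-listing switch alongside the usual fix of pointing the job at a flat directory of real input files:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class RecursiveInputDemo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        // Opt in to descending into subdirectories of the input path;
        // without this, any subdirectory triggers "Path is not a file".
        FileInputFormat.setInputDirRecursive(job, true);
        FileInputFormat.addInputPath(job, new Path("/data/input")); // path is an assumption
        // ... mapper/reducer setup and submission as usual ...
    }
}
```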
Running wordcount on Hadoop: main reports that the number of input arguments is not 2
The Java wordcount source: ![screenshot](https://img-ask.csdn.net/upload/201911/10/1573378042_516136.jpg) Running wordcount on Hadoop reports that there are not 2 input arguments: `Usage: wordcount <in> <out>` ![screenshot](https://img-ask.csdn.net/upload/201911/10/1573378135_729511.jpg)
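The stock wordcount example counts its arguments only after GenericOptionsParser has consumed Hadoop's own options (-D, -files, -libjars, ...), so the mismatch is usually in what survives parsing rather than in what was typed. A small sketch, using the same parsing step as the example, that prints exactly which arguments the job would see:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

/** Run with the same arguments you pass to wordcount to see what remains. */
public class ArgsDebug {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] remaining = new GenericOptionsParser(conf, args).getRemainingArgs();
        System.out.println("remaining args: " + remaining.length);
        for (String a : remaining) {
            System.out.println("  " + a);
        }
    }
}
```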
Hadoop wordcount test job fails
Today I set up a fully distributed environment with VMware and Ubuntu: one master, two slaves, Hadoop 2.5.2. The processes on the master and slaves all come up, and I can upload and download file blocks with `hadoop fs`. I wanted to try a wordcount job: after submission the job is dispatched to a slave, which creates tasks and starts executing, but somewhere in the middle something goes wrong. The SSH connection to that slave drops for a moment, and the datanode and nodemanager on that slave get killed. Switching to the other slave reproduces the same problem. The logs show no obvious error, only that the process received:

```
2016-10-31 15:28:42,236 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: RECEIVED SIGNAL 15: SIGTERM
```

Could some expert advise what causes this!!!! Log files: [http://pan.baidu.com/s/1eS0sRui](http://pan.baidu.com/s/1eS0sRui "log files")
Hadoop Streaming help needed!!!!!!!!
In single-machine pseudo-distributed mode I keep getting an ERROR; please help:

```
[root@localhost hadoop-1.0.1]# bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.1.jar -file /home/hul/HadoopTestMap -file /home/hul/HadoopTestReduce -mapper /user/root/in/HadoopTestMap -reducer /user/root/in/HadoopTestReduce -input /user/root/in/Test.txt -output /user/root/out/123.txt
packageJobJar: [/home/hul/HadoopTestMap, /home/hul/HadoopTestReduce, /tmp/hadoop-root/hadoop-unjar8224371331961097912/] [] /tmp/streamjob8473444495668295229.jar tmpDir=null
14/03/07 21:22:41 INFO mapred.FileInputFormat: Total input paths to process : 1
14/03/07 21:22:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-root/mapred/local]
14/03/07 21:22:42 INFO streaming.StreamJob: Running job: job_201403072024_0003
14/03/07 21:22:42 INFO streaming.StreamJob: To kill this job, run:
14/03/07 21:22:42 INFO streaming.StreamJob: /root/hadoop-1.0.1/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201403072024_0003
14/03/07 21:22:42 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201403072024_0003
14/03/07 21:22:43 INFO streaming.StreamJob: map 0% reduce 0%
14/03/07 21:23:47 INFO streaming.StreamJob: map 100% reduce 100%
14/03/07 21:23:47 INFO streaming.StreamJob: To kill this job, run:
14/03/07 21:23:47 INFO streaming.StreamJob: /root/hadoop-1.0.1/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201403072024_0003
14/03/07 21:23:47 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201403072024_0003
14/03/07 21:23:47 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201403072024_0003_m_000000
14/03/07 21:23:47 INFO streaming.StreamJob: killJob...
```
Warnings when running wordcount on a Hadoop cluster
I am a Hadoop beginner, please help me out, many thanks! I built a Hadoop cluster with Docker inside a virtual machine; the Docker image is ubuntu18.04. First, my hadoop1 master node runs the following services:

```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```

And the services on the two slave nodes:

```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```

```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```

On hadoop1 (the master) I create a /data/input directory in HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```

That is the batch of warnings; every single bin/hdfs dfs invocation prints it. Does this warning affect the cluster in any way, and how do I make it go away?

The same warning appears when pushing the test1 file to HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
(same five WARNING lines as above)
```

And when listing the uploaded file:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
(same five WARNING lines as above)
Found 1 items
-rw-r--r--   1 root supergroup         60 2019-09-15 08:07 /data/input/test1
```

And when running share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:

```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
(same five WARNING lines as above)
```

And when viewing the wordcount result afterwards:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
(same five WARNING lines as above)
first 1
hello 2
is 2
my 2
test1 1
testwordcount 1
this 2
```

Could some expert take a look at how to solve this? Many thanks!
Hadoop 2.2 wordcount error
Hadoop 2.2 + JDK 1.7. Running the wordcount example with `hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /word /ws` fails with:

```
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1449733659077_0001_m_000000_0: Error: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.InputSplit
```

Any pointers from the experts would be appreciated.
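That specific ClassCastException is the classic symptom of the old org.apache.hadoop.mapred API meeting the new org.apache.hadoop.mapreduce API in one job; whatever triggered it here (for example a stale mapred-site.xml entry naming an old-API class), every mapper, reducer, and input/output format in a job must come from the same package family. A minimal, consistent new-API skeleton for comparison (class names are illustrative):

```java
// Every import below is from the new API (org.apache.hadoop.mapreduce.*);
// pulling any one piece from org.apache.hadoop.mapred.* instead is what
// produces the FileSplit/InputSplit ClassCastException.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NewApiSkeleton {

    public static class PassThroughMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, new IntWritable(1));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "new-api skeleton");
        job.setJarByClass(NewApiSkeleton.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```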
Beginner testing the WordCount program on Hadoop
Running the wordcount program shows the following, and wordcount produces no output. What is the problem?

```
Stack trace: ExitCodeException exitCode=1: /bin/mv: target "/nm-local-dir/nmPrivate/application_1478712687631_0006/container_1478712687631_0006_02_000001/container_1478712687631_0006_02_000001.pid" is not a directory
```
A question about pseudo-distributed Hadoop!!!!!
In pseudo-distributed mode, Hadoop reads its files from HDFS, so why is it safe to delete the local input and output folders that were created during the standalone (single-machine) steps?
Newbie: wordcount program fails on Hadoop
Environment: Ubuntu 14.04 + Hadoop 2.6.1, running in VirtualBox with one master and three slave nodes. Hadoop itself starts successfully with no problems. I installed Eclipse on Ubuntu and wrote the word count program in Java; source code below:

```java
package wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * @author
 * @version created 2017-09-09 08:50:51
 */
public class Wordcount {

    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            StringTokenizer line = new StringTokenizer(value.toString());
            while (line.hasMoreTokens()) {
                word.set(line.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        protected void reduce(Text key, Iterable<IntWritable> values,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable obj : values) {
                sum += obj.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(Wordcount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hduser/demo/test.txt"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hduser/demo/wordcount"));
        //FileInputFormat.addInputPath(job, new Path(args[0]));
        //FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

After starting Hadoop, running this program directly from Eclipse works: it creates the wordcount folder containing a _SUCCESS file and the result files. Then I wanted to package it as a jar file and run it that way, so I changed:

```java
FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hduser/demo/test.txt"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hduser/demo/wordcount"));
```

to:

```java
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
```

so the two paths are passed in from the terminal. I exported the jar with Eclipse's export and ran:

```
hadoop jar wordcount.jar wordcount.Wordcount hdfs://master:9000/user/hduser/demo/test.txt hdfs://master:9000/user/hduser/demo/wordcount
```

which fails as follows:

```
17/09/09 11:18:53 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.56.100:8050
17/09/09 11:18:54 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/09/09 11:18:55 INFO input.FileInputFormat: Total input paths to process : 1
17/09/09 11:18:55 INFO mapreduce.JobSubmitter: number of splits:1
17/09/09 11:18:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1504926710828_0001
17/09/09 11:18:56 INFO impl.YarnClientImpl: Submitted application application_1504926710828_0001
17/09/09 11:18:56 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1504926710828_0001/
17/09/09 11:18:56 INFO mapreduce.Job: Running job: job_1504926710828_0001
17/09/09 11:19:14 INFO mapreduce.Job: Job job_1504926710828_0001 running in uber mode : false
17/09/09 11:19:14 INFO mapreduce.Job: map 0% reduce 0%
17/09/09 11:19:14 INFO mapreduce.Job: Job job_1504926710828_0001 failed with state FAILED due to: Application application_1504926710828_0001 failed 2 times due to AM Container for appattempt_1504926710828_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://master:8088/proxy/application_1504926710828_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1504926710828_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
    at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
17/09/09 11:19:14 INFO mapreduce.Job: Counters: 0
```

I checked the log files:

```
2017-09-09 11:18:55,869 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-1306163227_1
2017-09-09 11:18:59,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-09 11:18:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-09-09 11:19:12,241 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.56.102:53610 Call#7 Retry#0: java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
2017-09-09 11:19:12,293 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.56.102:53610 Call#8 Retry#0: java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
2017-09-09 11:19:29,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-09 11:19:29,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.56.100
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 29
2017-09-09 11:19:42,635 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 40 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 27 SyncTimes(ms): 545
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 40 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 28 SyncTimes(ms): 613
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_inprogress_0000000000000000029 -> /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_0000000000000000029-0000000000000000068
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 69
2017-09-09 11:19:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-09-09 11:19:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:20:29,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-09-09 11:20:29,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.56.100
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 69
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 24
2017-09-09 11:20:42,791 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 56
```

which contains this error:

```
java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
```

I'm a newbie and don't know what to do.
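The WARN line near the top of that output is a concrete hint: "Implement the Tool interface and execute your application with ToolRunner". That alone may not explain the container exiting with code 1 (the AM container's own log, reachable from the tracking page or, if log aggregation is enabled, via `yarn logs -applicationId application_1504926710828_0001`, is the place to look), but restructuring the driver is cheap and makes command-line handling standard. A sketch of the same program in that shape:

```java
package wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Wordcount extends Configured implements Tool {

    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // getConf() carries anything ToolRunner parsed from the command line.
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(Wordcount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips generic options (-D, -files, -libjars) before
        // the remaining arguments reach run().
        System.exit(ToolRunner.run(new Configuration(), new Wordcount(), args));
    }
}
```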
Hadoop cluster on Solaris: wordcount fails
Running wordcount on a Hadoop cluster deployed on Solaris fails; output below:

```
[admin@4bf635fa-5f3e-4b47-b42d-7558a6f0bbff ~]$ hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
15/08/20 00:48:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/20 00:48:10 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.77.27:8032
15/08/20 00:48:11 INFO input.FileInputFormat: Total input paths to process : 3
15/08/20 00:48:11 INFO mapreduce.JobSubmitter: number of splits:3
15/08/20 00:48:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439973008617_0002
15/08/20 00:48:13 INFO impl.YarnClientImpl: Submitted application application_1439973008617_0002
15/08/20 00:48:13 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1439973008617_0002/
15/08/20 00:48:13 INFO mapreduce.Job: Running job: job_1439973008617_0002
15/08/20 00:48:35 INFO mapreduce.Job: Job job_1439973008617_0002 running in uber mode : false
15/08/20 00:48:35 INFO mapreduce.Job: map 0% reduce 0%
15/08/20 00:48:35 INFO mapreduce.Job: Job job_1439973008617_0002 failed with state FAILED due to: Application application_1439973008617_0002 failed 2 times due to Error launching appattempt_1439973008617_0002_000002. Got exception: java.net.ConnectException: Call From localhost/127.0.0.1 to localhost:37524 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy83.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1446)
    ... 9 more
. Failing the application.
15/08/20 00:48:35 INFO mapreduce.Job: Counters: 0
```

`hadoop dfsadmin -report` output:

```
[admin@4bf635fa-5f3e-4b47-b42d-7558a6f0bbff ~]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/08/20 00:49:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 344033951744 (320.41 GB)
Present Capacity: 342854029952 (319.31 GB)
DFS Remaining: 342853630464 (319.31 GB)
DFS Used: 399488 (390.13 KB)
DFS Used%: 0.00%
Under replicated blocks: 7
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.77.28:50010 (slave1)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 344033951744 (320.41 GB)
DFS Used: 399488 (390.13 KB)
Non DFS Used: 1179921792 (1.10 GB)
DFS Remaining: 342853630464 (319.31 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.66%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Aug 20 00:49:36 UTC 2015
```

192.168.77.27 (master) runs the ResourceManager and NameNode; 192.168.77.28 (slave1) runs the DataNode and NodeManager. It looks as if Hadoop resolves the hostname of both the DataNode and the NodeManager as localhost, so it cannot reach them and fails. The exact same configuration works fine on CentOS; only on Solaris does it fail. ![NodeManager info](https://img-ask.csdn.net/upload/201508/20/1440038080_659898.png) ![DataNode info](https://img-ask.csdn.net/upload/201508/20/1440038161_998191.png)
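The dfsadmin report above already shows the give-away line "Hostname: localhost" for slave1: the daemons register under whatever name the local resolver returns, and on Solaris the default /etc/hosts and /etc/nsswitch.conf ordering can hand back localhost first. A quick check to run on each node (if it prints localhost or 127.0.0.1 on slave1, reorder the hosts entries so the node's real name maps to its LAN IP):

```java
import java.net.InetAddress;

/**
 * Prints what the local resolver reports for this machine, which is the
 * identity a Hadoop daemon will advertise to the NameNode/ResourceManager.
 */
public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        InetAddress me = InetAddress.getLocalHost();
        System.out.println("hostname : " + me.getHostName());
        System.out.println("canonical: " + me.getCanonicalHostName());
        System.out.println("address  : " + me.getHostAddress());
    }
}
```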
Hadoop test on virtual machines: wordcount fails, please help.
I used VMs to set up one namenode and three datanodes. With configuration done, everything starts and I can check the status in the web UI, but running wordcount produces `task id: attempt_1441184180788_0001 status: failed` with no exception thrown, and I'm out of ideas. Screenshots of the problem below; any help is appreciated. ![screenshot](https://img-ask.csdn.net/upload/201509/02/1441191088_814516.png) ![screenshot](https://img-ask.csdn.net/upload/201509/02/1441191101_286802.png)
Running Hadoop's bundled wordcount with hadoop jar fails
After installing hadoop-2.7.3 I wanted to try the bundled wordcount example, but it keeps failing with the error below about insufficient space. I haven't even started using the cluster, so how can this happen? It's a pseudo-distributed setup on a single machine; everything else runs normally, HDFS works and I can import files into it, but running with the `hadoop jar` command fails. ![screenshot](https://img-ask.csdn.net/upload/201706/05/1496656412_150716.png)
Errors while testing WordCount on Hadoop
I start Hadoop 2.7.0 with `sbin/start-dfs.sh`, then open http://localhost:50070 in a browser and get this message: "Non Heap Memory used 21.41MB of 22.62MB commited Non Heap Memory. Max Non Heap Memory is -1B". From searching online, this looks like non-heap memory exhaustion, but how should it be fixed?
WordCount test problem after restarting Hadoop
`ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop` Please help~
Urgently need a Hadoop final course project, thank you all!!!
1. Analyze global temperature data: records with temperatures between 15 and 25 degrees Celsius go to the part-r-00000 partition, and all other records go to the part-r-00001 partition.
2. Using the data in part-r-00000 and part-r-00001 as input, compute each year's global maximum and minimum temperature. The MapReduce output should contain the year, the maximum temperature, and the minimum temperature, sorted by maximum temperature in descending order, breaking ties by minimum temperature in ascending order.
3. The MapReduce job should treat 2000 lines as one logical (map) split and write its output to /exam plus the student ID; if that directory already exists, delete it first and then run the job.
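A sketch of the mechanical pieces of requirements 1 and 3, assuming the map output key is the year (Text) and the value is a Celsius temperature (IntWritable); the record layout, the student ID in the path, and the mapper/reducer are placeholders, and requirement 2's secondary sort (composite key plus grouping comparator) is not shown:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

public class TempJobSketch {

    /** Routes 15-25 degrees C (inclusive) to part-r-00000, the rest to part-r-00001. */
    public static class TempPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text year, IntWritable temp, int numPartitions) {
            int t = temp.get();
            return (t >= 15 && t <= 25) ? 0 : 1;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path out = new Path("/exam20230001"); // "/exam" + student ID; the ID is made up

        // Requirement 3: delete the output directory if it already exists.
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(out)) {
            fs.delete(out, true);
        }

        Job job = Job.getInstance(conf, "temperature split");
        job.setJarByClass(TempJobSketch.class);
        // Requirement 3: one map task per 2000 input lines.
        job.setInputFormatClass(NLineInputFormat.class);
        NLineInputFormat.setNumLinesPerSplit(job, 2000);
        // Requirement 1: two reducers, with custom routing between them.
        job.setPartitionerClass(TempPartitioner.class);
        job.setNumReduceTasks(2);
        // Mapper/reducer classes, the input path, output path wiring, and
        // job submission are omitted from this sketch.
    }
}
```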
First Hadoop exercise: wordcount fails... no idea what went wrong, please advise.
Could an experienced Hadoop user take a look at what is going on? Compiling and packaging both work with no errors; the error only appears at the very end when running with `hadoop jar`, and I can't quite tell where the problem is. The source code was found online. ![screenshot](https://img-ask.csdn.net/upload/201602/18/1455788004_97286.png) ![screenshot](https://img-ask.csdn.net/upload/201602/18/1455788026_686440.png)

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class WordCountMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer token = new StringTokenizer(line);
            while (token.hasMoreTokens()) {
                word.set(token.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class WordCountReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
```
Hadoop fails to start the JobHistoryServer process
I've been learning Hadoop recently and rented a Baidu Cloud server to deploy it. The NameNode, DataNode, and ResourceManager are all started. After configuring mapred-site.xml I wanted to start the JobHistoryServer process to look at the job history, but once I put the IP address in, either the process starts and the history page still won't open, or the process fails to start. Screenshots below.

Here I configured my server's IP (I only have the one machine for now): ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579569635_360875.png)

With that configuration, the startup log reports an error and the process exits. The error says the port is already in use: ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579569775_793904.png)

But when I check, that port is not in use (the other processes were already started): ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579570143_381425.png)

If I change the IP in mapred-site.xml as shown below, the JobHistoryServer process starts successfully: ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579570293_948304.png)

The process is running and the log shows no errors: ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579570409_667202.png)

But clicking "history" still doesn't show the job records: ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579570530_855645.png)

This is the access being refused after clicking history: ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579570606_198892.png)

I also wondered whether the server's hosts file was the problem; configured as below it still fails (with the IP in mapred-site.xml changed to hadoop44, the process won't start): ![screenshot](https://img-ask.csdn.net/upload/202001/21/1579570741_726008.png)

That's the problem I'm stuck on; could some expert help solve it? Thanks.
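One pattern that matches these symptoms on rented cloud hosts: the public IP is NAT-mapped and not actually present on any local network interface, so a daemon told to bind to it fails even though netstat shows the port free, while binding to the internal IP (or 0.0.0.0) succeeds but the web links then point somewhere unreachable from outside. A small bind probe, to be run with the exact address and port from mapred-site.xml (both arguments are whatever you configured):

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

/**
 * Tries to bind exactly the address:port configured for the JobHistoryServer.
 * On NAT-ed cloud hosts the public IP is not on any local interface, so the
 * bind fails even though the port itself is free.
 */
public class BindProbe {
    public static void main(String[] args) throws Exception {
        String host = args[0];                // the IP from mapred-site.xml
        int port = Integer.parseInt(args[1]); // e.g. 10020 or 19888
        try (ServerSocket ss = new ServerSocket()) {
            ss.bind(new InetSocketAddress(host, port));
            System.out.println("bind OK on " + host + ":" + port);
        }
    }
}
```

If binding the public IP fails here too, the usual arrangement is to put the internal IP (or a hostname mapped to it in /etc/hosts) in mapred-site.xml and reach the web UI through the provider's public-to-internal port mapping.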