Hadoop wordcount error: Job job_1581768459583_0001 failed

Three nodes: hadoop01, hadoop02, hadoop03.
hadoop01 is the master node; hadoop01, hadoop02, and hadoop03 all run as slave nodes. The cluster is already set up: `jps` shows all daemons running normally on all three nodes, and the web UI also displays correctly. However, running the wordcount example from the bundled hadoop-mapreduce-examples-2.7.4.jar fails with the error below. Could someone please explain it? I can't make sense of it:

```
[root@hadoop01 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.4.jar wordcount /wordcount/input /wordcount/output
20/02/15 20:14:25 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.233.132:8032
20/02/15 20:14:27 INFO input.FileInputFormat: Total input paths to process : 1
20/02/15 20:14:27 INFO mapreduce.JobSubmitter: number of splits:1
20/02/15 20:14:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1581768459583_0001
20/02/15 20:14:28 INFO impl.YarnClientImpl: Submitted application application_1581768459583_0001
20/02/15 20:14:28 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1581768459583_0001/
20/02/15 20:14:28 INFO mapreduce.Job: Running job: job_1581768459583_0001
20/02/15 20:15:38 INFO mapreduce.Job: Job job_1581768459583_0001 running in uber mode : false
20/02/15 20:15:38 INFO mapreduce.Job: map 0% reduce 0%
20/02/15 20:15:38 INFO mapreduce.Job: Job job_1581768459583_0001 failed with state FAILED due to: Application application_1581768459583_0001 failed 2 times due to Error launching appattempt_1581768459583_0001_000002. Got exception: java.io.IOException: Failed on local exception: java.io.IOException: java.io.IOException: Connection reset by peer; Host Details : local host is: "hadoop01.com/79.124.78.101"; destination host is: "79.124.78.101":43276;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy83.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy84.startContainers(Unknown Source)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: java.io.IOException: Connection reset by peer
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
... 16 more
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
... 19 more
. Failing the application.
20/02/15 20:15:38 INFO mapreduce.Job: Counters: 0
```

a123073062
I struggled with this all night and finally solved it. I had set the hostnames of the three VMs to hadoop01.com, hadoop02.com, and hadoop03.com; after removing the ".com" suffix the job ran fine. What puzzles me is why: as long as the names in /etc/hosts map correctly, shouldn't it just work? The three VMs could even copy files back and forth by hostname without any problem, so why did the ".com" have to be removed?
Replied 8 months ago
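The log above likely explains why: the local host is reported as "hadoop01.com/79.124.78.101", a public IP, while the ResourceManager lives at 192.168.233.132. hadoop01.com is a real Internet domain, so when the FQDN is not covered by /etc/hosts, the resolver falls back to DNS and returns an external address, and YARN then tries to launch the AM container against that address and gets "Connection reset by peer". A quick way to check (a diagnostic sketch; the hadoop02/hadoop03 addresses below are assumed for illustration, only hadoop01's 192.168.233.132 appears in the question):

```shell
# On each node, check what the local hostname actually resolves to.
# If the reported IP is not the cluster-internal 192.168.233.x address,
# YARN will advertise the wrong address for container launches.
hostname -f                      # the fully qualified name the JVM will see
getent hosts "$(hostname -f)"    # which IP that name maps to, honoring /etc/hosts

# A minimal /etc/hosts that keeps every cluster name on the internal network
# (hadoop02/hadoop03 addresses are placeholders -- adjust to your cluster):
# 192.168.233.132 hadoop01
# 192.168.233.133 hadoop02
# 192.168.233.134 hadoop03
```

If `getent hosts` returns an address outside the cluster network for any node, either add the exact FQDN to /etc/hosts on every node or, as the answer did, use hostnames that are not real Internet domains.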