Hadoop 2.6 NameNode fails to start

(everything before this point looks normal)
2016-03-23 08:30:10,036 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-03-23 08:30:10,040 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-03-23 08:30:10,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-03-23 08:30:10,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-03-23 08:30:10,142 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-03-23 08:30:10,144 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

1 Answer

Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.

The log states it plainly: the NameNode has not been formatted, so there is no fsimage to load. Format it once before the first start; a sketch follows.
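For reference, a minimal sketch of the usual first-time fix. Note that formatting erases any existing NameNode metadata, so this is only for a new cluster or one whose HDFS contents can be discarded:
```
stop-dfs.sh                # stop HDFS if it is partially running
hdfs namenode -format      # initialize dfs.namenode.name.dir once, before the first start
start-dfs.sh               # start HDFS again
jps                        # the NameNode process should now stay up
```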

u014539992
jiazhuoran Thanks, veteran~ I'm new to this and my English isn't great~ from now on I'll first try to read the error messages myself~
replied almost 4 years ago
Related questions
Hadoop 2.6 setup: error during formatting
```
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /var/log/hadoop/hadoop/hdfs-audit.log (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
15/10/12 16:15:27 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395)
15/10/12 16:15:27 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395)
15/10/12 16:15:27 INFO util.ExitUtil: Exiting with status 1
15/10/12 16:15:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
```
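Both messages point at filesystem permissions rather than Hadoop itself: the user running the format can neither create the audit log nor clear the old name directory. A sketch of the usual repair, assuming HDFS runs as user `hadoop` (adjust user, group, and paths to your setup):
```
sudo mkdir -p /var/log/hadoop/hadoop
sudo chown -R hadoop:hadoop /var/log/hadoop/hadoop /hadoop/hdfs/namenode
hdfs namenode -format    # retry once the directories are writable
```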
Upgrading Hadoop 2.6.0 to 2.6.3
I followed the procedure on the official site: 1. downloaded hadoop 2.6.3; 2. ran rollingUpgrade prepare; 3. stopped the standby NameNode and ran rollingUpgrade started from the 2.6.3 environment; 4. swapped the active and standby NameNodes and repeated step 3 on the new standby; 5. upgraded the DataNodes; 6. ran the finalize step. After step 3 a lot of INFO-level messages get printed; can they be ignored? Also, a NameNode started this way has no pid file (nothing for the namenode under hadoop/pids); is that a problem? Finally, after the DataNodes were upgraded, the finalize step reported that no rolling upgrade is in progress. Has anyone done this upgrade? Where did my procedure go wrong?
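For comparison, the documented command sequence is roughly the sketch below (exact restart mechanics depend on how you run the daemons). Two of the observations have mundane explanations: INFO-level messages during startup are normal, and a pid file only appears when a daemon is launched through hadoop-daemon.sh/start-dfs.sh, which is what writes hadoop/pids; running `hdfs namenode ...` by hand writes none.
```
hdfs dfsadmin -rollingUpgrade prepare    # create the rollback fsimage
hdfs dfsadmin -rollingUpgrade query      # repeat until it says to proceed
# restart each NameNode in turn (standby first) from the 2.6.3 install:
hdfs namenode -rollingUpgrade started
# after all DataNodes have been restarted on 2.6.3 and are live again:
hdfs dfsadmin -rollingUpgrade finalize
```
If finalize claims no upgrade is in progress, one guess is that it was issued against a NameNode that was never restarted with the -rollingUpgrade started option.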
Hadoop 2.5.2 MapReduce job fails
```
16/06/14 03:26:45 INFO client.RMProxy: Connecting to ResourceManager at centos1/192.168.6.132:8032
16/06/14 03:26:47 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/14 03:26:47 INFO input.FileInputFormat: Total input paths to process : 1
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: number of splits:1
16/06/14 03:26:48 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1465885546873_0002
16/06/14 03:26:49 INFO impl.YarnClientImpl: Submitted application application_1465885546873_0002
16/06/14 03:26:49 INFO mapreduce.Job: The url to track the job: http://centos1:8088/proxy/application_1465885546873_0002/
16/06/14 03:26:49 INFO mapreduce.Job: Running job: job_1465885546873_0002
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 running in uber mode : false
16/06/14 03:27:10 INFO mapreduce.Job: map 0% reduce 0%
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 failed with state FAILED due to: Application application_1465885546873_0002 failed 2 times due to Error launching appattempt_1465885546873_0002_000002. Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:50334 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:118)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
The error log is as follows:
```
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1465885546873_0002_01_000001, NodeId: local:42709, NodeHttpAddress: local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:42709 }, ] for AM appattempt_1465885546873_0002_000001
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1465885546873_0002_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR>
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr 2016-06-14 03:26:50,948 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:51,950 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:52,951 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:53,952 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:54,953 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:55,954 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:56,956 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:57,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:58,959 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:59,960 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2016-06-14 03:26:59,962 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error launching appattempt_1465885546873_0002_000001. 
Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:42709 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
```
core-site.xml:
```
<configuration>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>centos1:2181,centos2:2181,centos3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop2.5</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
```
hdfs-site.xml:
```
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>centos1,centos2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos1</name>
    <value>centos1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos2</name>
    <value>centos2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.centos1</name>
    <value>centos1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.centos2</name>
    <value>centos2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://centos2:8485;centos3:8485;centos4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop-data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```
yarn-site.xml:
```
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>centos1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>centos1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>centos1:8033</value>
  </property>
</configuration>
```
mapred-site.xml:
```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```
slaves:
```
centos2
centos3
centos4
```
hosts:
```
127.0.0.1 local local.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```
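One reading of this failure, guessing from the hosts file quoted above: every node resolves the name `local` to 127.0.0.1 (it sits on the loopback line), so the NodeManager registers itself as `local:42709` and the ResourceManager then tries to launch the AM on its own loopback, where nothing listens. A sketch of a cleaned-up /etc/hosts; the point is that no real hostname should appear on the 127.0.0.1 line:
```
# /etc/hosts on every node (sketch)
127.0.0.1     localhost localhost.localdomain
::1           localhost localhost.localdomain
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```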
Problems setting up an HBase 0.98.15 + Hadoop 2.6 + ZooKeeper 3.4.6 environment
OS: CentOS 7, a distributed setup with 3 nodes; node0 is the master, and node0, node1, node2 all serve as DataNodes. When HBase is started, the master has no HMaster process. The HBase log shows:
```
2015-11-13 16:23:59,178 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,node0:2181,node2:2181 sessionTimeout=60000 watcher=master:600000x0, quorum=node1:2181,node0:2181,node2:2181, baseZNode=/hbase
2015-11-13 16:23:59,195 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Opening socket connection to server node1/192.168.0.161:2181. Will not attempt to authenticate using SASL (unknown error)
2015-11-13 16:23:59,200 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Socket connection established to node1/192.168.0.161:2181, initiating session
2015-11-13 16:23:59,211 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Session establishment complete on server node1/192.168.0.161:2181, sessionid = 0x150feccba590005, negotiated timeout = 40000
2015-11-13 16:23:59,223 INFO [main] zookeeper.RecoverableZooKeeper: Node /hbase already exists and this is not a retry
2015-11-13 16:23:59,245 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-11-13 16:23:59,246 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: starting
2015-11-13 16:23:59,301 INFO [master:namenode:60000] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-11-13 16:23:59,344 INFO [master:namenode:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-11-13 16:23:59,346 INFO [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2015-11-13 16:23:59,346 INFO [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-11-13 16:23:59,355 INFO [master:namenode:60000] http.HttpServer: Jetty bound to port 60010
2015-11-13 16:23:59,355 INFO [master:namenode:60000] mortbay.log: jetty-6.1.26
2015-11-13 16:23:59,656 INFO [master:namenode:60000] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
2015-11-13 16:23:59,724 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available
2015-11-13 16:23:59,725 INFO [master:namenode:60000] master.ActiveMasterManager: Registered Active Master=namenode,60000,1447403038605
2015-11-13 16:23:59,731 INFO [master:namenode:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-11-13 16:23:59,875 FATAL [master:namenode:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1342) at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371) at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1403) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1382) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1307) at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:384) at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:380) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:380) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:324) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775) at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:664) at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:642) at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:599) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:481) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:154) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130) at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881) at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684) at java.lang.Thread.run(Thread.java:745) 2015-11-13 16:23:59,876 INFO [master:namenode:60000] master.HMaster: Aborting 2015-11-13 16:23:59,877 DEBUG [master:namenode:60000] master.HMaster: Stopping service threads 2015-11-13 16:23:59,877 INFO [master:namenode:60000] ipc.RpcServer: Stopping server on 60000 2015-11-13 16:23:59,877 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping 2015-11-13 16:23:59,877 INFO [master:namenode:60000] master.HMaster: Stopping infoServer 2015-11-13 16:23:59,877 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped 2015-11-13 16:23:59,877 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping 2015-11-13 16:23:59,879 INFO [master:namenode:60000] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010 2015-11-13 16:23:59,994 INFO [master:namenode:60000] zookeeper.ZooKeeper: Session: 0x150feccba590005 closed 2015-11-13 16:23:59,994 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down 2015-11-13 16:23:59,994 INFO [master:namenode:60000] master.HMaster: HMaster main thread exiting 2015-11-13 16:23:59,995 ERROR [main] master.HMasterCommandLine: Master exiting java.lang.RuntimeException: HMaster Aborted at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:201) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3062) ``` 
Every node has the hadoop and zookeeper processes, and the HRegionServer processes are there too; only the master is missing HMaster. I have checked the config files many times. One more odd thing: after every reboot, Hadoop starts and the processes show up, but http://node0:50070 is unreachable from outside until I run iptables -F; likewise, until that command has been run, zookeeper's MODE won't display either, i.e. cluster mode does not come up.
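Two separate issues, offered as guesses. The NoSuchMethodError on FSOutputSummer is the classic signature of HBase 0.98's bundled Hadoop 2.2 client jars talking to a Hadoop 2.6 HDFS; and the 50070/ZooKeeper symptom after every reboot is simply the CentOS 7 firewall coming back up (iptables -F only clears it until the next boot). A sketch under those assumptions:
```
# 1) let HBase use the cluster's Hadoop 2.6 jars instead of its bundled ones
cd $HBASE_HOME/lib
rm -f hadoop-*.jar
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar .
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-*.jar .
# (repeat for the remaining hadoop-* jars HBase shipped), then restart HBase

# 2) keep the firewall from undoing 'iptables -F' on every boot (CentOS 7),
#    or open 50070/2181/60000/60010 explicitly instead of disabling it
systemctl stop firewalld && systemctl disable firewalld
```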
Hadoop cluster set up; every other process starts, but not the NameNode, and the log shows errors
The Hadoop cluster is set up and all the other processes start, but the NameNode does not. Its log reports:
```
192.168.100.70:8485: Call From anlulu-1/192.168.100.10 to anlulu-7:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.100.50:8485: Call From anlulu-1/192.168.100.10 to anlulu-5:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.100.60:8485: Call From anlulu-1/192.168.100.10 to anlulu-6:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$7.apply(JournalSet.java:590)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:359)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:587)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1330)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:994)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1521)
at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1399)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1160)
at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-08-20 00:41:50,962 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-08-20 00:41:50,966 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
```
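The NameNode cannot reach a quorum of JournalNodes on port 8485, so the first thing to verify is that the JournalNodes are actually running and reachable before the NameNode starts. A sketch using the hosts from the log:
```
# on each of anlulu-5, anlulu-6 and anlulu-7:
hadoop-daemon.sh start journalnode
jps                     # should now list JournalNode
ss -ntlp | grep 8485    # confirm the port is listening (or use netstat -ntlp)
# then start the NameNode again from anlulu-1
```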
Hadoop 2.6.2 Configuration
hadoop$ sbin/start-dfs.sh then fails as below; hoping an expert can help:
```
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on [OpenJDK Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2015-11-27 04:47:02,913 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable]
Error: Cannot find configuration directory: /etc/hadoop
Error: Cannot find configuration directory: /etc/hadoop
Starting secondary namenodes [OpenJDK Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2015-11-27 04:47:04,891 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0.0.0.0]
Error: Cannot find configuration directory: /etc/hadoop
```
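Two distinct problems show in this output, read at face value: the start scripts cannot find a configuration directory (so they fall back to /etc/hadoop), and no NameNode RPC address is configured. A sketch assuming the install lives under /usr/local/hadoop:
```
# point the scripts at the real config dir (persist it in ~/.bashrc or etc/hadoop/hadoop-env.sh)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
# and make sure core-site.xml sets the NameNode address, for example:
#   <property><name>fs.defaultFS</name><value>hdfs://localhost:9000</value></property>
sbin/start-dfs.sh
```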
Restarting Hadoop fails: webapps/hdfs not found in CLASSPATH
I recently need to set up HBase on 16 nodes. Environment: hadoop2.6 + zookeeper3.4.6 + hbase0.98.9, ubuntu12.04 server 64bit, jdk 1.8.0_11. Hadoop is configured with 1 namenode and 15 datanodes, zookeeper on 3 machines, and hbase with one HMaster and 15 HRegionServers. The initial setup worked fine and jobs ran on hbase without problems. I needed to change the number of nodes, so I shut down hbase, then zookeeper, then the hadoop cluster. I did not format the hadoop namenode. Since then the namenode will not come up. Has anyone looked into this problem? The HDFS startup log says: java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH. I could not find this problem or a solution online. I suspect either that the hbase cluster needs a format when it is shut down, or that some config file is wrong. I checked the hadoop config files that mention classpath; the only one is yarn.application.classpath in yarn-site.xml, which I have commented out, so it should not be that. I have tried re-formatting the namenode and restarting: no luck. Tried deleting hadoop and reconfiguring from scratch: no luck. Tried editing the hdfs startup script to export the webapps/hdfs folder manually at run time: no luck. Does anyone have a fix, or at least a rough direction? Startup log follows, from ${HADOOP_HOME}/logs/hadoop-ubuntu-namenode-m1.log:
```
2015-02-01 22:45:43,870 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = m1/172.18.9.100
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /home/ubuntu/hadoop/hadoop-2.6.0/etc/hadoop:/home/ubuntu/hadoop/hadoop-2.6.0/share/hadoop/common/lib/asm-3$
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.8.0_11
************************************************************/
2015-02-01 22:45:43,887 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 22:45:44,515 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-02-01 22:45:44,670 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 22:45:44,671 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 22:45:50,066 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4j$
2015-02-01 22:45:50,069 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 22:45:50,070 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 22:45:50,070 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 22:45:50,071 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH
at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:585)
at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:251)
at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:174)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:76)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:74)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:626)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:488)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2015-02-01 22:45:50,075 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 22:45:50,078 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at m1/172.18.9.100
************************************************************/
```
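One clue worth chasing, purely an observation on the log itself: STARTUP_MSG reports version = 2.2.0 and a 2013 build, while the classpath points into hadoop-2.6.0, so two Hadoop versions appear to be mixed on this machine and the 2.2 NameNode cannot find the 2.6 layout's webapp directory. A sketch of how to confirm which install is actually being run:
```
which hadoop hdfs                  # do these resolve into hadoop-2.6.0?
echo "$HADOOP_HOME $HADOOP_PREFIX $HADOOP_CONF_DIR"
hadoop version                     # should print 2.6.0, not 2.2.0
ls "$HADOOP_HOME/share/hadoop/hdfs/webapps"   # the 'hdfs' webapp should be here
```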
Hadoop cluster setup: Cannot assign requested address
Problem:
```
2016-09-05 16:40:03,320 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.net.BindException: Problem binding to [master:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:719)
at org.apache.hadoop.ipc.Server.bind(Server.java:419)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:561)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2166)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:897)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:505)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:480)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:742)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:311)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:614)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
```
The cluster is built on cloud servers, using their public addresses. ifconfig shows:
```
eth0 Link encap:Ethernet HWaddr 52:54:00:C6:39:03
     inet addr:10.141.88.213 Bcast:10.141.127.255 Mask:255.255.192.0
```
but my hosts file is:
```
127.0.0.1 localhost localhost.localdomain
123.207.161.186 master
123.207.145.16 slave1
123.207.140.169 slave2
```
If I configure the private addresses instead, the machines cannot ping each other.
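On most clouds the VM never owns its public IP (eth0 only carries the private 10.141.x address), so binding master:9000 to 123.207.161.186 has to fail with "Cannot assign requested address". The usual workaround, sketched: each machine maps its own hostname to the address its NIC really has, while the other machines keep the public addresses:
```
# /etc/hosts on master (sketch)
10.141.88.213   master      # the address eth0 actually owns
123.207.145.16  slave1
123.207.140.169 slave2
# on slave1/slave2, keep: 123.207.161.186  master
```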
Hadoop rack awareness cannot load the class
I have configured net.topology.node.switch.mapping.impl in core-site.xml and put the jar under /opt/modules/hadoop-2.7.2/share/hadoop/common/lib. After starting I get the error below. Which directory can Hadoop load the jar from at startup?
```
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.learning.rackawareness.RackAwareness not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2227)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:208)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:268)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:737)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:246)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.learning.rackawareness.RackAwareness not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2219)
... 6 more
Caused by: java.lang.ClassNotFoundException: Class com.learning.rackawareness.RackAwareness not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
... 7 more
2019-10-07 15:23:41,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-10-07 15:23:41,970 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: SHUTDOWN_MSG:
```
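Worth noting that the stack trace comes from the SecondaryNameNode, so the class has to be visible to every daemon on every node, not just where the jar was first copied. A common approach, sketched with the path from the question (the jar file name is an assumption):
```
# on every node, in etc/hadoop/hadoop-env.sh:
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/rackawareness.jar"
# then restart the HDFS daemons so they pick it up
stop-dfs.sh && start-dfs.sh
```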
Hadoop 2.x cluster deployment: one DataNode won't start
```
Exception in secureMain
java.net.UnknownHostException: node1: node1
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2153)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
Caused by: java.net.UnknownHostException: node1
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 6 more
2015-01-16 09:08:54,152 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-01-16 09:08:54,164 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: node1: node1
************************************************************/
```
Environment: Ubuntu, hadoop2.6, jdk7. Three VMs are deployed: one namenode and two datanodes. /etc/hostname is set to master, node1, and node2 respectively. /etc/hosts is:
```
27.0.0.1 localhost
127.0.1.1 ubuntu.localdomain ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.184.129 master
192.168.184.130 node1
192.168.184.131 node2
```
hadoop/etc/hadoop/slaves is:
```
node1
node2
```
core-site.xml:
```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yangwq/hadoop-2.6.0/temp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
```
hdfs-site.xml:
```
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/yangwq/hadoop-2.6.0/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/yangwq/hadoop-2.6.0/dfs/data</value>
  </property>
</configuration>
```
mapred-site.xml:
```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
  </property>
</configuration>
```
yarn-site.xml:
```
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- resourcemanager hostname or IP address -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>
```
At startup the DataNode on node1 never comes up, even though ssh logins to every node work fine.
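The DataNode dies while resolving its own hostname, so the check belongs on node1 itself rather than on the master. Also note that the pasted hosts file begins with "27.0.0.1 localhost", which looks like a lost leading 1. A quick sketch:
```
# run these on node1
hostname             # expect: node1
getent hosts node1   # must resolve; if it does not, fix node1's own /etc/hosts:
#   127.0.0.1        localhost
#   192.168.184.130  node1
```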
Hadoop MapReduce job error
java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) at org.apache.hadoop.ipc.Client.call(Client.java:1475) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy31.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy32.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448) Job Submission failed with exception 'java.lang.RuntimeException(Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) )' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
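The key phrase is that 2 DataNodes are running but both are "excluded in this operation": the NameNode knows them, yet the client writing map.xml cannot use either, which commonly means full DataNode disks or a client that cannot reach the DataNode transfer port. A few checks, as a sketch:
```
hdfs dfsadmin -report           # remaining capacity, and the addresses the DataNodes registered
# from the machine running Hive, probe a DataNode's data-transfer port (50010 by default);
# <datanode-host> is whatever the report above shows
nc -vz <datanode-host> 50010
```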
How to make an HA Hadoop cluster start with a fixed active NameNode
Hadoop 2.6 is configured for high availability, but whichever NameNode becomes active at startup is random. Is there a way to make a specific NameNode active when Hadoop starts, and only fail over to the standby when it dies?
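With automatic failover enabled, whichever ZKFC wins the ZooKeeper election first becomes active, so "random" is the expected behavior. A common workaround is to let the cluster come up and then force a graceful failover to the preferred node; a sketch with hypothetical service ids nn1/nn2 (use the names from your dfs.ha.namenodes.<nameservice>):
```
hdfs haadmin -getServiceState nn1   # see which NameNode is currently active
hdfs haadmin -failover nn2 nn1      # make nn1 active; nn2 drops to standby
```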
Error inserting data into a Hive table; please help, thanks
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Query ID = root_20190413230214_49834037-bff2-4fb9-b935-94eff9780c09 Total jobs = 3 Launching Job 1 out of 3 Number of reduce tasks is set to 0 since there's no reduce operator java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/root/998d79c6-e750-4680-a08d-efc1714952ef/hive_2019-04-13_23-02-14_924_6242126447268871177-1/-mr-10004/b7f8c78b-2559-4e30-b053-ae37b4f49a05/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1814) at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2569) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:624) at org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:549) at org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:541) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:350) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:244) at org.apache.hadoop.util.RunJar.main(RunJar.java:158) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/root/998d79c6-e750-4680-a08d-efc1714952ef/hive_2019-04-13_23-02-14_924_6242126447268871177-1/-mr-10004/b7f8c78b-2559-4e30-b053-ae37b4f49a05/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1814) at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2569) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1507) at org.apache.hadoop.ipc.Client.call(Client.java:1453) at org.apache.hadoop.ipc.Client.call(Client.java:1363) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy29.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy30.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1845) at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710) Job Submission failed with exception 'java.lang.RuntimeException(Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/tmp/hive/root/998d79c6-e750-4680-a08d-efc1714952ef/hive_2019-04-13_23-02-14_924_6242126447268871177-1/-mr-10004/b7f8c78b-2559-4e30-b053-ae37b4f49a05/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1814) at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2569) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) )' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/root/998d79c6-e750-4680-a08d-efc1714952ef/hive_2019-04-13_23-02-14_924_6242126447268871177-1/-mr-10004/b7f8c78b-2559-4e30-b053-ae37b4f49a05/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1814) at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2569) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
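Here the message says "0 datanode(s) running", so HDFS has no live DataNodes at all and every write must fail. A first-aid sketch; the clusterID check covers the most common cause after re-formatting the NameNode:
```
hdfs dfsadmin -report             # confirm: Live datanodes (0)
hadoop-daemon.sh start datanode   # on each worker; then read the datanode log for why it exits
# frequent culprit after 'hdfs namenode -format': the clusterID in
#   <dfs.datanode.data.dir>/current/VERSION  no longer matches
#   <dfs.namenode.name.dir>/current/VERSION
# sync the clusterID, or wipe the datanode directory on a throwaway cluster
```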
Hadoop: formatting the NameNode reports an error
ubuntu 16.04, hadoop 2.6.0. Initializing with hadoop namenode -format fails, saying the namenode cannot be found, but my environment variable configuration looks fine to me. ![screenshot](https://img-ask.csdn.net/upload/201609/11/1473599781_110258.png) ![screenshot](https://img-ask.csdn.net/upload/201609/11/1473599831_458409.png) Any advice appreciated.
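Without the screenshots, the usual suspects are HADOOP_HOME and PATH not taking effect in the current shell. A sketch assuming an install under /usr/local/hadoop (adjust the path):
```
export HADOOP_HOME=/usr/local/hadoop     # assumption: your install location
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
which hadoop hdfs                        # both should now resolve
hdfs namenode -format                    # preferred over the deprecated 'hadoop namenode -format'
```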
HDFS startup warning: WARN util.NativeCodeLoader
Starting HDFS produces the following warning:
```
[hadoop@hadoop tmp]$ start-dfs.sh
15/02/02 20:39:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
hadoop: starting namenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-hadoop.out
15/02/02 20:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
```
Searching online suggests it is a Linux platform issue.
```
[hadoop@hadoop ~]$ uname -a
Linux hadoop 2.6.32-431.el6.i686 #1 SMP Fri Nov 22 00:26:36 UTC 2013 i686 i686 i386 GNU/Linux
```
JDK version:
```
[hadoop@hadoop ~]$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Client VM (build 19.1-b02, mixed mode, sharing)
```
Hadoop version:
```
[hadoop@hadoop ~]$ hadoop version
Hadoop 2.5.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0
Compiled by jenkins on 2014-11-14T23:45Z
Compiled with protoc 2.5.0
From source with checksum df7537a4faa4658983d397abf4514320
This command was run using /usr/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar
```
Could an expert please take a look?
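The uname output is the tell: the OS is 32-bit (i686), while the native library shipped in Apache Hadoop tarballs is built for 64-bit Linux, so the loader falls back to pure-Java implementations. The warning is harmless. To confirm or get rid of it, a sketch:
```
# confirm the architecture mismatch (expect an ELF 64-bit library on a 32-bit kernel)
file /usr/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0
# options: ignore the warning, move to a 64-bit OS, or rebuild the natives from source:
#   mvn package -Pdist,native -DskipTests -Dtar
```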
Newbie: error running a wordcount program on Hadoop
Environment: Ubuntu 14.04 + Hadoop 2.6.1, running in VirtualBox VMs with one master and three slave nodes. Hadoop starts successfully without any problems. I installed Eclipse on Ubuntu and wrote a word count program in Java; the source is as follows:
```
package wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * @author
 * @version created 2017-09-09 08:50:51
 */
public class Wordcount {

    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            StringTokenizer line = new StringTokenizer(value.toString());
            while (line.hasMoreTokens()) {
                word.set(line.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        protected void reduce(Text key, Iterable<IntWritable> values,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable obj : values) {
                sum += obj.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(Wordcount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hduser/demo/test.txt"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hduser/demo/wordcount"));
        //FileInputFormat.addInputPath(job, new Path(args[0]));
        //FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
After starting Hadoop, running the program above directly from Eclipse succeeds: the wordcount folder is created, containing a _SUCCESS file as well as the result files of the count. Then I wanted to package the program as a jar file and run it that way, so I first changed this part of the program:
```
FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hduser/demo/test.txt"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hduser/demo/wordcount"));
```
to the following:
```
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
```
so that the two paths are passed in as arguments from the terminal. I packaged the jar with Eclipse's export and then ran in the terminal:
```
hadoop jar wordcount.jar wordcount.Wordcount hdfs://master:9000/user/hduser/demo/test.txt hdfs://master:9000/user/hduser/demo/wordcount
```
This fails with the following error:
```
17/09/09 11:18:53 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.56.100:8050
17/09/09 11:18:54 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/09/09 11:18:55 INFO input.FileInputFormat: Total input paths to process : 1
17/09/09 11:18:55 INFO mapreduce.JobSubmitter: number of splits:1
17/09/09 11:18:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1504926710828_0001
17/09/09 11:18:56 INFO impl.YarnClientImpl: Submitted application application_1504926710828_0001
17/09/09 11:18:56 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1504926710828_0001/
17/09/09 11:18:56 INFO mapreduce.Job: Running job: job_1504926710828_0001
17/09/09 11:19:14 INFO mapreduce.Job: Job job_1504926710828_0001 running in uber mode : false
17/09/09 11:19:14 INFO mapreduce.Job: map 0% reduce 0%
17/09/09 11:19:14 INFO mapreduce.Job: Job job_1504926710828_0001 failed with state FAILED due to: Application application_1504926710828_0001 failed 2 times due to AM Container for appattempt_1504926710828_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://master:8088/proxy/application_1504926710828_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1504926710828_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
17/09/09 11:19:14 INFO mapreduce.Job: Counters: 0
```
I checked the log files:
```
2017-09-09 11:18:55,869 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-1306163227_1
2017-09-09 11:18:59,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-09 11:18:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-09-09 11:19:12,241 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.56.102:53610 Call#7 Retry#0: java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
2017-09-09 11:19:12,293 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.56.102:53610 Call#8 Retry#0: java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
2017-09-09 11:19:29,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-09 11:19:29,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.56.100
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 29
2017-09-09 11:19:42,635 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 40 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 27 SyncTimes(ms): 545
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 40 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 28 SyncTimes(ms): 613
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_inprogress_0000000000000000029 -> /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_0000000000000000029-0000000000000000068
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 69
2017-09-09 11:19:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-09-09 11:19:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:20:29,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-09-09 11:20:29,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.56.100
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 69
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 24
2017-09-09 11:20:42,791 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 56
```
An error is reported in there:
```
java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
```
I'm a newbie and don't know what to do next.
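The AM container exits with code 1 before the job proper starts; the .jhist FileNotFoundException in the namenode log is a downstream symptom, not the cause. The real reason is written to the container's own stderr. A sketch of how to get at it (the application id is the one from the output above):

```
# Fetch the aggregated container logs for the failed application
yarn logs -applicationId application_1504926710828_0001

# If log aggregation is disabled, look on the node that ran the AM instead:
# $HADOOP_HOME/logs/userlogs/application_1504926710828_0001/container_*/stderr
```

With Eclipse-exported jars this "works in the IDE, fails as a jar" pattern is often a classpath or manifest problem; the container stderr states the actual exception explicitly.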
Beginner asking for help: the NodeManager process on the Hadoop cluster slaves keeps dying
s100 is the master; s101, s102 and s103 are slaves. After `namenode -format` followed by `start-all.sh`, the NodeManager process can be seen on the slaves, but the web UI at http://S100:50070 shows no datanode information. Checking the yarn-root-nodemanager-S101.log on a slave node reveals the following errors:
```
2019-06-01 02:20:02,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:03,596 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:04,603 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:05,609 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:06,611 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:07,616 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,624 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 18 more 2019-06-01 02:20:08,632 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at 
com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 18 more 2019-06-01 02:20:08,638 INFO org.apache.hadoop.service.AbstractService: Service NodeManager failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at 
org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 18 more 2019-06-01 02:20:08,716 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042 2019-06-01 02:20:08,727 INFO org.apache.hadoop.ipc.Server: Stopping server on 39863 2019-06-01 02:20:08,742 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 39863 2019-06-01 02:20:08,748 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2019-06-01 02:20:08,749 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting. 2019-06-01 02:20:08,790 INFO org.apache.hadoop.ipc.Server: Stopping server on 8040 2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8040 2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2019-06-01 02:20:08,836 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Public cache exiting 2019-06-01 02:20:08,840 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system... 2019-06-01 02:20:08,842 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped. 2019-06-01 02:20:08,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete. 
2019-06-01 02:20:08,843 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 
18 more
2019-06-01 02:20:08,857 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at S101/127.0.0.1
************************************************************/
```
Could someone take a look? Thanks in advance. Below is the configuration on the slaves.
yarn-site.xml:
```
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>S100</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/disk1/nm-local-dir,/disk2/nm-local-dir</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>16384</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>16</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>S100:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>S100:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>S100:8031</value>
</property>
</configuration>
```
core-site.xml:
```
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://S100:8020/</value>
</property>
</configuration>
```
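From the log, the NodeManager on S101 registers itself as S101/127.0.0.1 and cannot reach the ResourceManager at S100:8031 (Connection refused). Two things worth checking, sketched below: whether the ResourceManager is actually up and listening on S100, and whether /etc/hosts maps the hostnames to real LAN addresses rather than 127.0.0.1:

```
# On the master S100: is the ResourceManager process up, and is port 8031 open?
jps | grep ResourceManager
netstat -tlnp | grep 8031

# On the slave S101: "From S101/127.0.0.1" in the log suggests S101 resolves
# its own hostname to 127.0.0.1; it should resolve to its LAN IP instead.
grep -i 's10' /etc/hosts
telnet S100 8031    # quick reachability test of the resource-tracker port
```

Also note that yarn-site.xml defines yarn.nodemanager.aux-services twice, once as mapreduce.shuffle and once as mapreduce_shuffle; only the underscore form is valid on Hadoop 2.x.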
Error while setting up the Hadoop environment; I really can't find a fix
I've been searching for a long time without finding a solution. Please help me out, thanks~
STARTUP_MSG: java = 1.6.0_45
************************************************************/
15/11/28 08:52:23 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/11/28 08:52:23 INFO namenode.NameNode: createNameNode [-format]
15/11/28 08:52:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-0f5cc549-bfbb-45b2-a735-c3500a2e83cf
15/11/28 08:52:25 INFO namenode.FSNamesystem: fsLock is fair:true
15/11/28 08:52:25 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/11/28 08:52:25 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/11/28 08:52:25 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/11/28 08:52:25 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Nov 28 08:52:25
15/11/28 08:52:25 INFO util.GSet: Computing capacity for map BlocksMap
15/11/28 08:52:25 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:25 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/11/28 08:52:25 INFO util.GSet: capacity = 2^22 = 4194304 entries
15/11/28 08:52:25 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/11/28 08:52:25 INFO blockmanagement.BlockManager: defaultReplication = 1
15/11/28 08:52:25 INFO blockmanagement.BlockManager: maxReplication = 512
15/11/28 08:52:25 INFO blockmanagement.BlockManager: minReplication = 1
15/11/28 08:52:25 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/11/28 08:52:25 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/11/28 08:52:25 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/11/28 08:52:25 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/11/28 08:52:25 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/11/28 08:52:25 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
15/11/28 08:52:25 INFO namenode.FSNamesystem: supergroup = supergroup
15/11/28 08:52:25 INFO namenode.FSNamesystem: isPermissionEnabled = false
15/11/28 08:52:25 INFO namenode.FSNamesystem: HA Enabled: false
15/11/28 08:52:25 INFO namenode.FSNamesystem: Append Enabled: true
15/11/28 08:52:26 INFO util.GSet: Computing capacity for map INodeMap
15/11/28 08:52:26 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:26 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/11/28 08:52:26 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/11/28 08:52:26 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/11/28 08:52:26 INFO util.GSet: Computing capacity for map cachedBlocks
15/11/28 08:52:26 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:26 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/11/28 08:52:26 INFO util.GSet: capacity = 2^19 = 524288 entries
15/11/28 08:52:26 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/11/28 08:52:26 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/11/28 08:52:26 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/11/28 08:52:26 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/11/28 08:52:26 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/11/28 08:52:26 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/11/28 08:52:26 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:26 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/11/28 08:52:26 INFO util.GSet: capacity = 2^16 = 65536 entries
15/11/28 08:52:26 INFO namenode.NNConf: ACLs enabled? false
15/11/28 08:52:26 INFO namenode.NNConf: XAttrs enabled? true
15/11/28 08:52:26 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/11/28 08:52:26 FATAL namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:368)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:327)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:261)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
15/11/28 08:52:26 INFO util.ExitUtil: Exiting with status 1
15/11/28 08:52:26 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.131.2
************************************************************/
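The fatal line is `java.lang.IllegalArgumentException: URI has an authority component`, thrown from java.io.File. This almost always means a local storage directory in core-site.xml or hdfs-site.xml (hadoop.tmp.dir, dfs.namenode.name.dir, or similar) is written as a `file:` URI with only two slashes, so the first path segment is parsed as a URI authority. A sketch of how to spot it (the property values shown are illustrative):

```
# Wrong: "home" becomes the URI authority  -> file://home/hadoop/dfs/name
# Right: empty authority, absolute path    -> file:///home/hadoop/dfs/name
# Also fine: a plain local path            -> /home/hadoop/dfs/name
grep -n 'file://' $HADOOP_HOME/etc/hadoop/core-site.xml \
                  $HADOOP_HOME/etc/hadoop/hdfs-site.xml
```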
Hadoop cluster: hdfs dfs -ls / lists the wrong directory
I set up a Hadoop cluster, but `hdfs dfs -ls /` lists the root directory of the local filesystem; only `hdfs dfs -ls hdfs://servicename/` lists the directories actually on HDFS. What could be the reason? The directories created by Hive also end up on the local filesystem. The cluster configuration is as follows.

Cluster plan (hostname, IP, installed software, running processes):
hadoop01 192.168.175.129 jdk, hadoop: NameNode, DFSZKFailoverController (zkfc)
hadoop02 192.168.175.127 jdk, hadoop: NameNode, DFSZKFailoverController (zkfc)
hadoop03 192.168.175.126 jdk, hadoop: ResourceManager
hadoop04 192.168.175.125 jdk, hadoop: ResourceManager
hadoop05 192.168.175.124 jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop06 192.168.175.123 jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop07 192.168.175.122 jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
(Windows: NLB; Linux: LVS)

1. After installing the Linux VMs, set the VM network adapter to host-only mode, then assign the IPs (hadoop01 as the example):
DEVICE="eth0"
BOOTPROTO="static" ###
HWADDR="00:0C:29:3C:BF:E7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.175.129" ###
NETMASK="255.255.255.0" ###
GATEWAY="192.168.175.1" ###
2. Change the hostname:
vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop01 ###
3. Disable the firewall:
#check the firewall status
service iptables status
#stop the firewall
service iptables stop
#check whether the firewall is started on boot
chkconfig iptables --list
#disable starting the firewall on boot
chkconfig iptables off
4. Passwordless login:
#generate an ssh key for passwordless login
#go to the home directory
cd ~/.ssh
ssh-keygen -t rsa (press Enter four times)
After this command finishes, two files are generated: id_rsa (private key) and id_rsa.pub (public key).
Copy the public key to the machine you want to log into without a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
If you get "ssh-copy-id: ERROR: No identities found", the public key path cannot be found; add -i followed by the path:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote_ip
5. Host/IP mappings (the full mapping must be configured in /etc/hosts on every machine):
192.168.175.129 hadoop01
192.168.175.127 hadoop02
192.168.175.126 hadoop03
192.168.175.125 hadoop04
192.168.175.124 hadoop05
192.168.175.123 hadoop06
192.168.175.122 hadoop07
6. Configure the Java environment variables in /etc/profile:
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
#reload the profile
source /etc/profile
If a version error is reported, edit /etc/selinux/config, set SELINUX=disabled, and reboot the VM.
7. Install zookeeper:
1. Install and configure the zookeeper cluster (on hadoop05):
1.1 Unpack:
tar -zxvf zookeeper-3.4.6.tar.gz -C /lichangwu/
1.2 Edit the configuration:
cd /lichangwu/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change: dataDir=/lichangwu/zookeeper-3.4.6/tmp
and append at the end:
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888
Save and exit, then create a tmp folder:
mkdir /lichangwu/zookeeper-3.4.6/tmp
create an empty file:
touch /lichangwu/zookeeper-3.4.6/tmp/myid
and finally write the ID into that file:
echo 1 > /lichangwu/zookeeper-3.4.6/tmp/myid
1.3 Copy the configured zookeeper to the other nodes (first create a /lichangwu directory on hadoop06 and hadoop07: mkdir /lichangwu):
scp -r /lichangwu/zookeeper-3.4.6/ hadoop06:/lichangwu/
scp -r /lichangwu/zookeeper-3.4.6/ hadoop07:/lichangwu/
Note: adjust the content of /lichangwu/zookeeper-3.4.6/tmp/myid on hadoop06 and hadoop07 accordingly:
hadoop06: echo 2 > /lichangwu/zookeeper-3.4.6/tmp/myid
hadoop07: echo 3 > /lichangwu/zookeeper-3.4.6/tmp/myid
8. Install and configure the hadoop cluster (performed on hadoop01):
2.1 Unpack:
tar -zxvf hadoop-2.4.1.tar.gz -C /lichangwu/
2.2 Configure HDFS (all hadoop 2.0 configuration files are under $HADOOP_HOME/etc/hadoop):
#add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export HADOOP_HOME=/lichangwu/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
#the hadoop 2.0 configuration files are all under $HADOOP_HOME/etc/hadoop
cd /lichangwu/hadoop-2.4.1/etc/hadoop
2.2.1 Edit hadoop-env.sh:
export JAVA_HOME=/lichangwu/jdk1.7.0_79
2.2.2 Edit core-site.xml:
<configuration>
<!-- set the nameservice of hdfs to ns1 -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1</value>
</property>
<!-- the hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/lichangwu/hadoop-2.4.1/tmp</value>
</property>
<!-- the zookeeper addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
</property>
</configuration>
2.2.3 Edit hdfs-site.xml:
<configuration>
<!-- the hdfs nameservice is ns1; it must match core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<!-- ns1 has two NameNodes, nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>hadoop01:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>hadoop01:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>hadoop02:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>hadoop02:50070</value>
</property>
<!-- where the NameNode's metadata is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
</property>
<!-- where the JournalNode keeps its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/lichangwu/hadoop-2.4.1/journal</value>
</property>
<!-- enable automatic failover when a NameNode fails -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- the failover implementation -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- fencing methods, separated by newlines, i.e. one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- the sshfence mechanism requires passwordless ssh -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- timeout for the sshfence mechanism -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
2.2.4 Edit mapred-site.xml:
<configuration>
<!-- run MapReduce on yarn -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
2.2.5 Edit yarn-site.xml:
<configuration>
<!-- enable RM high availability -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- the RM cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- the RM names -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- the address of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop03</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop04</value>
</property>
<!-- the zookeeper cluster addresses -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
2.2.6 Edit slaves (slaves specifies the worker nodes; since HDFS is started on hadoop01 and YARN on hadoop03, the slaves file on hadoop01 specifies where the datanodes run and the one on hadoop03 specifies where the nodemanagers run):
hadoop05
hadoop06
hadoop07
2.2.7 Configure passwordless login:
#first configure passwordless login from hadoop01 to hadoop02, hadoop03, hadoop04, hadoop05, hadoop06 and hadoop07
#generate a key pair on hadoop01
ssh-keygen -t rsa
#copy the public key to the other nodes, including itself
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
#configure passwordless login from hadoop03 to hadoop04, hadoop05, hadoop06 and hadoop07
#generate a key pair on hadoop03
ssh-keygen -t rsa
#copy the public key to the other nodes
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
#note: the two namenodes need passwordless ssh between each other; don't forget hadoop02 to hadoop01
Generate a key pair on hadoop02:
ssh-keygen -t rsa
ssh-copy-id -i hadoop01
2.4 Copy the configured hadoop to the other nodes:
scp -r hadoop-2.4.1/ hadoop02:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop03:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop04:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop05:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop06:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop07:/lichangwu/hadoop-2.4.1/
###Note: follow the steps below strictly in order
2.5 Start the zookeeper cluster (on hadoop05, hadoop06 and hadoop07):
cd /lichangwu/zookeeper-3.4.6/bin/
./zkServer.sh start
#check the status: one leader, two followers
./zkServer.sh status
2.6 Start the journalnodes (run on hadoop05, hadoop06 and hadoop07):
cd /lichangwu/hadoop-2.4.1
sbin/hadoop-daemon.sh start journalnode
#verify with the jps command: hadoop05, hadoop06 and hadoop07 each gain a JournalNode process
2.7 Format HDFS:
#run on hadoop01:
hdfs namenode -format
#formatting generates files under the hadoop.tmp.dir configured in core-site.xml,
#here /lichangwu/hadoop-2.4.1/tmp; then copy /lichangwu/hadoop-2.4.1/tmp to hadoop02's /lichangwu/hadoop-2.4.1/:
scp -r tmp/ hadoop02:/lichangwu/hadoop-2.4.1/
2.8 Format ZK (run on hadoop01):
hdfs zkfc -formatZK
2.9 Start HDFS (on hadoop01):
sbin/start-dfs.sh
2.10 Start YARN (#####note#####: run start-yarn.sh on hadoop03; if it does not come up on hadoop04, run start-yarn.sh once more on hadoop04; the namenode and resourcemanager are placed on different machines for performance reasons, since both consume a lot of resources, and once separated they must be started on their respective machines):
sbin/start-yarn.sh
At this point hadoop-2.4.1 is fully configured, and you can check in a browser:
http://192.168.175.129:50070 NameNode 'hadoop01:9000' (active)
http://192.168.175.127:50070 NameNode 'hadoop02:9000' (standby)
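If `hdfs dfs -ls /` shows the local root, the client is resolving fs.defaultFS to the built-in default file:/// instead of hdfs://ns1, which usually means the core-site.xml you edited is not the one the client actually reads. A quick diagnostic sketch:

```
# What does the client think the default filesystem is?
hdfs getconf -confKey fs.defaultFS    # should print hdfs://ns1, not file:///

# Common causes: HADOOP_CONF_DIR pointing elsewhere, or another hadoop
# install earlier on PATH picking up its own (empty) configuration.
echo $HADOOP_CONF_DIR
which hdfs
```

Hive inherits the same client configuration, which is why its directories also land on the local filesystem.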