Hadoop cluster setup: Cannot assign requested address

Problem:
```
2016-09-05 16:40:03,320 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.net.BindException: Problem binding to [master:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:719)
    at org.apache.hadoop.ipc.Server.bind(Server.java:419)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:561)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2166)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:897)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:505)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:480)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:742)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:311)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:614)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
```

The cluster is built on cloud servers and uses the public IP addresses.

ifconfig shows:

```
eth0  Link encap:Ethernet  HWaddr 52:54:00:C6:39:03
      inet addr:10.141.88.213  Bcast:10.141.127.255  Mask:255.255.192.0
```

But my /etc/hosts is:

```
127.0.0.1       localhost localhost.localdomain
123.207.161.186 master
123.207.145.16  slave1
123.207.140.169 slave2
```

If I configure the private IP addresses instead, the hosts cannot ping each other.
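
A commonly used workaround for exactly this situation on cloud hosts (a sketch, not a verified fix for this cluster): in each node's /etc/hosts, map the node's own hostname to its private IP, the one ifconfig actually shows, so the NameNode can bind to it, and map the other nodes to their public IPs so they stay reachable across hosts.

```sh
# Sketch of /etc/hosts on the master node (assumption: private IP taken from
# the ifconfig output above; every node maps ITS OWN name to its private IP):
cat > /etc/hosts <<'EOF'
127.0.0.1       localhost localhost.localdomain
10.141.88.213   master
123.207.145.16  slave1
123.207.140.169 slave2
EOF
```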

2 answers

Reconfigure the IP addresses and make sure they are on the same network segment; for example, if one node is 123.207.161.186, the others should be 123.207.161.XXX.

Cluster nodes are best placed in the same subnet; otherwise the configuration gets much more complicated. Also, a Hadoop cluster should ideally be deployed on an internal network: Hadoop was not designed with security as a priority, and from the start it assumes the cluster runs in a safe, trusted environment.
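
Another direction sometimes suggested for public-IP clusters (an assumption sketched here, not something the answer above prescribes): keep the hosts file as it is, but tell the NameNode to bind its RPC and HTTP servers to all local interfaces via the dfs.namenode.rpc-bind-host and dfs.namenode.http-bind-host properties available in Hadoop 2.x.

```sh
# Sketch: properties to merge into the <configuration> element of
# etc/hadoop/hdfs-site.xml, then restart HDFS.
cat <<'EOF'
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
EOF
```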

y10081403
碧幽月: Oh, okay, thanks anyway!
Replied over 3 years ago
hijack00
hijack00: I have only ever set up a Hadoop cluster inside virtual machines myself: 5 nodes, fully distributed.
Replied over 3 years ago
y10081403
碧幽月: I am only testing, so I requested a few servers, which makes the configuration a hassle. Do you have a solution?
Replied over 3 years ago
Other related questions
How do I fix "bash: hadoop: command not found" when running the PI example to verify the Hadoop cluster?

To check whether the Hadoop cluster was set up successfully, I ran the PI example:

```
[hfut@master ~]$ cd ~/hadoop-2.5.2/share/hadoop/mapreduce/
[hfut@master mapreduce]$ hadoop jar home/hfut/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 10 10
```

The second command prints "bash: hadoop: command not found", as shown here: ![screenshot](https://img-ask.csdn.net/upload/202001/30/1580370011_357824.png) How can I fix this?
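
The message usually just means the hadoop launcher is not on PATH; note also that the jar path in the command above is relative (it starts with "home/hfut/...", missing the leading "/"). A sketch, assuming the install really is under /home/hfut:

```sh
export HADOOP_HOME=/home/hfut/hadoop-2.5.2            # assumed install path
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin  # persist this in ~/.bashrc
hadoop version   # should now resolve instead of "command not found"
hadoop jar /home/hfut/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 10 10
```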

Could the experts please tell me how to write a lab report on building a Hadoop cluster in a distributed environment?

How do I write the practical-training report for setting up a Hadoop cluster in a distributed environment? It needs to cover the equipment used, the experiment principle, and the experiment contents and steps.

ZooKeeper cluster setup problem

```
2016-10-09 22:36:38,628 [myid:3] - INFO [main:QuorumPeer@1005] - initLimit set to 10
2016-10-09 22:36:38,646 [myid:3] - INFO [Thread-1:QuorumCnxManager$Listener@504] - My election bind port: /123.207.11.*:3883
2016-10-09 22:36:38,647 [myid:3] - ERROR [/123.207.11.*:3883:QuorumCnxManager$Listener@517] - Exception while listening
java.net.BindException: Cannot assign requested address
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
    at java.net.ServerSocket.bind(ServerSocket.java:376)
    at java.net.ServerSocket.bind(ServerSocket.java:330)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:507)
2016-10-09 22:36:38,654 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumPeer@714] - LOOKING
2016-10-09 22:36:38,656 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2183:FastLeaderElection@815] - New election. My id = 3, proposed zxid=0x0
2016-10-09 22:36:38,663 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@382] - Cannot open channel to 1 at election address /123.207.18.*:3881
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
    at java.lang.Thread.run(Thread.java:745)
2016-10-09 22:36:38,667 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /119.29.130.*:3882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
    at java.lang.Thread.run(Thread.java:745)
2016-10-09 22:36:38,668 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-10-09 22:36:38,870 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumCnxManager@382] - Cannot open channel to 2 at election address /119.29.*3882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
2016-10-09 22:36:38,871 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumCnxManager@382] - Cannot open channel to 1 at election address /123.207.*:3881
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
2016-10-09 22:36:38,872 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2183:FastLeaderElection@849] - Notification time out: 400
2016-10-09 22:36:39,273 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumCnxManager@382] - Cannot open channel to 2 at election address /119.29.*:3882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
```

What on earth is causing this? My zoo.cfg is:

```
server.1=123.207.18.*:2881:3881
server.2=119.29.130.*:2882:3882
server.3=123.207.11.*2883:3883
```

myid is configured and the firewall is off. These are real servers on Tencent Cloud, and the Tencent Cloud security groups are configured as well. Why does it keep failing? Could it be a problem with the Tencent Cloud servers, and is there a way around it? Please help.
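
A widely reported fix for this exact pattern on cloud machines (a sketch and an assumption, not a confirmed diagnosis): the BindException appears because the server cannot bind to its own public IP, which belongs to the provider's NAT layer rather than to a local interface. In zoo.cfg on each machine, replace only that machine's own server entry with 0.0.0.0 and keep the peers' public IPs.

```sh
# Sketch for the machine with myid=3; the other two machines edit their own
# entry analogously:
#   server.1=123.207.18.*:2881:3881
#   server.2=119.29.130.*:2882:3882
#   server.3=0.0.0.0:2883:3883
grep '^server\.' conf/zoo.cfg   # verify the three entries after editing
```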

Starting Hadoop on Win10 reports Error: Could not find or load main class PC

I installed Hadoop 2.9.2 on Windows 10 and changed the following in hadoop-env.cmd:

```
set JAVA_HOME=C://PROGRA~1/java/jdk1.8.0_191
set HADOOP_PREFIX=E://hadoop-2.9.2
```

Hadoop commands then behave oddly.

Case 1: running hadoop directly works fine:

```
E:\hadoop-2.9.2\bin>hadoop
Usage: hadoop [--config confdir] [--loglevel loglevel] COMMAND
where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the Hadoop jar and the required libraries
  credential           interact with credential providers
  key                  manage keys via the KeyProvider
  daemonlog            get/set the log level for each daemon
 or CLASSNAME          run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
```

Case 2: hadoop classpath prints all the classpath entries correctly:

```
E:\hadoop-2.9.2\bin>hadoop classpath
E:\hadoop-2.9.2\etc\hadoop;E:\hadoop-2.9.2\share\hadoop\common\lib\*;E:\hadoop-2.9.2\share\hadoop\common\*;E:\hadoop-2.9.2\share\hadoop\hdfs;E:\hadoop-2.9.2\share\hadoop\hdfs\lib\*;E:\hadoop-2.9.2\share\hadoop\hdfs\*;E:\hadoop-2.9.2\share\hadoop\yarn;E:\hadoop-2.9.2\share\hadoop\yarn\lib\*;E:\hadoop-2.9.2\share\hadoop\yarn\*;E:\hadoop-2.9.2\share\hadoop\mapreduce\lib\*;E:\hadoop-2.9.2\share\hadoop\mapreduce\*
```

Case 3: hadoop version fails:

```
E:\hadoop-2.9.2\bin>hadoop version
Error: Could not find or load main class PC
```

I looked around and some sources say the classpath is wrong, but the classpath shown really is my Hadoop install path. Has anyone run into something similar? Please share, thank you.
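
One cause commonly reported for exactly this message (an assumption, not confirmed for this machine): a Windows user name containing a space, such as "Lenovo PC". hadoop.cmd passes HADOOP_IDENT_STRING, which defaults to %USERNAME%, to the JVM unquoted, so the word after the space ("PC") gets interpreted as the main class. A sketch of the workaround in etc\hadoop\hadoop-env.cmd:

```cmd
rem Sketch (assumption): force a space-free identity string so the JVM
rem arguments built by hadoop.cmd do not split on the user name.
set HADOOP_IDENT_STRING=hadoop
```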

After the Hadoop cluster is set up, the DataNode weirdly starts on the master machine instead of the slaves

I have one master and 3 slaves. The problem: after setting up the cluster, running start-all.sh on the master also starts the DataNode on that machine, and nothing starts on the slave machines. I am using Hadoop 3.1 + JDK 1.8. Earlier, following a book, Hadoop 2.6 + OpenJDK 10 set up and started fine: the NameNode and friends came up on the master and the DataNodes on the slaves. After switching to the new version it no longer works, and I have been at it for a whole day.

Current status: all machines can ping and ssh each other, and all of them can reach the internet. If one machine acts as both master and slave, the 50070 page (changed to 9870 since Hadoop 3, of course) opens fine and everything works.

Running start-all.sh on the master prints:

```
WARNING: Attempting to start all Apache Hadoop daemons as hduser in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [emaster]
Starting datanodes
Starting secondary namenodes [emaster]
2018-05-04 22:39:37,858 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
```

It never even looks at the slaves. jps then shows:

```
3233 SecondaryNameNode
3492 ResourceManager
2836 NameNode
3653 NodeManager
3973 Jps
3003 DataNode
```

Everything started on the master, and nothing started on the slave machines. The DataNode log shows it started the master's DataNode three times (my master is named emaster); an excerpt:

```
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = emaster/192.168.56.100
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.1.0
```

and each round reports this error:

```
java.io.EOFException: End of File Exception between local host is: "emaster/192.168.56.100"; destination host is: "emaster":9000; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:789)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
    at org.apache.hadoop.ipc.Client.call(Client.java:1437)
    at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy20.sendHeartbeat(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:166)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:514)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:645)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:841)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1796)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1165)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1061)
2018-05-04 21:49:43,320 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
```

My guess: I configured 3 slaves, but something in the configuration makes the master start its own DataNode instead of the slaves'. I just cannot find the mistake; the masters and slaves files are both configured, and the machines can all ping each other. Could some expert point this newbie in the right direction? Many thanks!
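
One likely culprit worth checking (an assumption based on the symptom, not a confirmed diagnosis): Hadoop 3 reads the worker list from etc/hadoop/workers, not from the old slaves file. If only slaves is populated, the start scripts fall back to starting a DataNode on localhost, which matches everything starting on emaster.

```sh
# Sketch: list the slave hostnames (placeholders here, substitute the real
# ones) in the workers file on the master, then restart the cluster.
cat > $HADOOP_HOME/etc/hadoop/workers <<'EOF'
eslave1
eslave2
eslave3
EOF
sbin/stop-all.sh && sbin/start-all.sh
```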

Hadoop cluster fails to start even though hosts is configured

While setting up the cluster I hit a problem. I configured core-site.xml: ![screenshot](https://img-ask.csdn.net/upload/201707/12/1499826204_101372.png) but it reports an error: ![screenshot](https://img-ask.csdn.net/upload/201707/12/1499826219_340072.png) I have not been able to find the cause; any help would be much appreciated.

hydra fails the Maven build (hadoop)

```
[ERROR] Failed to execute goal on project hydra-hbase: Could not resolve dependencies for project com.jd.bdp:hydra-hbase:jar:1.0-SNAPSHOT: The following artifacts could not be resolved: org.apache.hadoop:hadoop-common:jar:bdp-2.0.0-cdh4.1.1, org.apache.hbase:hbase:jar:bdp-0.92.1-cdh4.1.1: Could not find artifact org.apache.hadoop:hadoop-common:jar:bdp-2.0.0-cdh4.1.1 in central (http://repo.maven.apache.org/maven2) -> [Help 1]
```

That is the error; I have no idea what to do next. Experts, please help!
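
The *-cdh4.1.1 artifacts are Cloudera builds that are not published to Maven Central, so resolution against central is expected to fail. A sketch (assumption: the project is otherwise set up for CDH): add Cloudera's repository to the pom.xml and resolve again.

```sh
# Repository fragment to merge into <repositories> in pom.xml:
cat <<'EOF'
<repository>
  <id>cloudera</id>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
EOF
mvn -U clean install   # -U forces re-checking the remote repositories
```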

Hadoop cluster set up and all other processes start, but the NameNode does not; the log shows errors

The Hadoop cluster is set up and every other process starts, but the NameNode does not. Its log shows:

```
192.168.100.70:8485: Call From anlulu-1/192.168.100.10 to anlulu-7:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.100.50:8485: Call From anlulu-1/192.168.100.10 to anlulu-5:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.100.60:8485: Call From anlulu-1/192.168.100.10 to anlulu-6:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet$7.apply(JournalSet.java:590)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:359)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:587)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1330)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:994)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1521)
    at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
    at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1399)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1160)
    at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
    at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-08-20 00:41:50,962 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-08-20 00:41:50,966 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
```
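
With quorum-journal HA, "Connection refused" on port 8485 normally means the JournalNodes are simply not running yet; the NameNode can only start once a quorum of them is listening. A sketch of the usual check on each journal host (anlulu-5/6/7 here):

```sh
sbin/hadoop-daemon.sh start journalnode   # start the JournalNode daemon
jps | grep JournalNode                    # confirm the process exists
netstat -tlnp | grep 8485                 # confirm it is listening on 8485
```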

NameNode startup problem log

```
java.net.BindException: Problem binding to [vm01:9000] java.net.BindException: Cannot assign requested address;
```
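
A quick diagnostic sketch: this exception means vm01 resolves to an IP address that no interface on the machine actually owns, so the first things to compare are the hosts mapping and the locally configured addresses.

```sh
hostname -i            # what this host believes its own address is
grep vm01 /etc/hosts   # what vm01 resolves to
ip addr | grep 'inet ' # addresses actually present on this machine
```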

Hadoop cluster + ZooKeeper: starting the journalnode fails

When setting up a Hadoop cluster with ZooKeeper, starting the journalnode fails with: Error: Cannot find configuration directory: /itcast/hadoop-2.2.0/etc/hadoop
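
The message names a directory the scripts cannot find, so the first check is whether that path actually exists on the host where the command runs. A sketch:

```sh
ls /itcast/hadoop-2.2.0/etc/hadoop                 # does the directory exist here?
export HADOOP_CONF_DIR=/itcast/hadoop-2.2.0/etc/hadoop
sbin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" start journalnode
```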

hadoop 2.6 setup: format fails with errors

```
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /var/log/hadoop/hadoop/hdfs-audit.log (No such file or directory)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
    at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
    at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
    at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
    at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
    at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
    at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
15/10/12 16:15:27 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395)
15/10/12 16:15:27 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395)
15/10/12 16:15:27 INFO util.ExitUtil: Exiting with status 1
15/10/12 16:15:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
```
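
Both errors point at directories the current user cannot write: the log4j audit log under /var/log/hadoop and the NameNode storage directory. A sketch, assuming HDFS runs as a hypothetical user hdfs in group hadoop:

```sh
mkdir -p /var/log/hadoop/hadoop /hadoop/hdfs/namenode
chown -R hdfs:hadoop /var/log/hadoop /hadoop/hdfs   # assumed service account
hdfs namenode -format                               # retry the format
```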

Hadoop cluster: hdfs dfs -ls / lists the wrong directory

I set up a Hadoop cluster. Running hdfs dfs -ls / lists the root of the local filesystem; only hdfs dfs -ls hdfs://servicename/ lists the directories actually on HDFS. What could be the reason? Directories created by Hive also end up on the local filesystem. The cluster configuration is as follows.

Cluster plan (hostname / IP / installed software / running processes):

```
hadoop01  192.168.175.129  jdk, hadoop             NameNode, DFSZKFailoverController (zkfc)
hadoop02  192.168.175.127  jdk, hadoop             NameNode, DFSZKFailoverController (zkfc)
hadoop03  192.168.175.126  jdk, hadoop             ResourceManager
hadoop04  192.168.175.125  jdk, hadoop             ResourceManager
hadoop05  192.168.175.124  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop06  192.168.175.123  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop07  192.168.175.122  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
```

(Windows: NLB, Linux: LVS)

1. After installing the Linux VM, set the VM network to host-only mode, then assign the IPs (hadoop01 as the example):

```
DEVICE="eth0"
BOOTPROTO="static"        ###
HWADDR="00:0C:29:3C:BF:E7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.175.129"  ###
NETMASK="255.255.255.0"   ###
GATEWAY="192.168.175.1"   ###
```

2. Change the hostname (vim /etc/sysconfig/network):

```
NETWORKING=yes
HOSTNAME=hadoop01  ###
```

3. Disable the firewall:

```
service iptables status     # check firewall status
service iptables stop       # stop the firewall
chkconfig iptables --list   # check whether it starts at boot
chkconfig iptables off      # disable starting at boot
```

4. Passwordless login:

```
# generate the ssh key pair in the home directory
cd ~/.ssh
ssh-keygen -t rsa           # press Enter four times
# this produces id_rsa (private key) and id_rsa.pub (public key);
# copy the public key to the machines that should allow passwordless login
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# or, if "ssh-copy-id: ERROR: No identities found" is reported because the
# public key path cannot be found, pass the path explicitly with -i:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote_ip
```

5. Host/IP mappings (the full set must be configured in /etc/hosts on every machine):

```
192.168.175.129 hadoop01
192.168.175.127 hadoop02
192.168.175.126 hadoop03
192.168.175.125 hadoop04
192.168.175.124 hadoop05
192.168.175.123 hadoop06
192.168.175.122 hadoop07
```

6. Configure the Java environment in /etc/profile:

```
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile   # reload profile
```

If a version error is reported: vi /etc/selinux/config, set SELINUX=disabled, then reboot the VM.

7. Install ZooKeeper:

1. Install and configure the ZooKeeper cluster (on hadoop05):
1.1 Unpack: tar -zxvf zookeeper-3.4.6.tar.gz -C /lichangwu/
1.2 Edit the configuration:

```
cd /lichangwu/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
# change: dataDir=/lichangwu/zookeeper-3.4.6/tmp
# and append at the end:
#   server.1=hadoop05:2888:3888
#   server.2=hadoop06:2888:3888
#   server.3=hadoop07:2888:3888
# save and quit, then create a tmp folder and write the server id into myid:
mkdir /lichangwu/zookeeper-3.4.6/tmp
touch /lichangwu/zookeeper-3.4.6/tmp/myid
echo 1 > /lichangwu/zookeeper-3.4.6/tmp/myid
```

1.3 Copy the configured ZooKeeper to the other nodes (first create a /lichangwu directory on hadoop06 and hadoop07 with mkdir /lichangwu):

```
scp -r /lichangwu/zookeeper-3.4.6/ hadoop06:/lichangwu/
scp -r /lichangwu/zookeeper-3.4.6/ hadoop07:/lichangwu/
# note: adjust /lichangwu/zookeeper-3.4.6/tmp/myid on hadoop06 and hadoop07:
# itcast06: echo 2 > /lichangwu/zookeeper-3.4.6/tmp/myid
# itcast07: echo 3 > /lichangwu/zookeeper-3.4.6/tmp/myid
```

8. Install and configure the Hadoop cluster (on hadoop01):

2.1 Unpack: tar -zxvf hadoop-2.4.1.tar.gz -C /lichangwu/
2.2 Configure HDFS (all hadoop 2.0 configuration files live under $HADOOP_HOME/etc/hadoop):

```
# add hadoop to the environment
vim /etc/profile
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export HADOOP_HOME=/lichangwu/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
cd /lichangwu/hadoop-2.4.1/etc/hadoop
```

2.2.1 Edit hadoop-env.sh: export JAVA_HOME=/lichangwu/jdk1.7.0_79

2.2.2 Edit core-site.xml:

```
<configuration>
  <!-- the HDFS nameservice is ns1 -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/lichangwu/hadoop-2.4.1/tmp</value>
  </property>
  <!-- zookeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
  </property>
</configuration>
```

2.2.3 Edit hdfs-site.xml:

```
<configuration>
  <!-- nameservice ns1; must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>hadoop01:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>hadoop01:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>hadoop02:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>hadoop02:50070</value>
  </property>
  <!-- where the NameNode metadata is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
  </property>
  <!-- where the JournalNode keeps its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/lichangwu/hadoop-2.4.1/journal</value>
  </property>
  <!-- enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- failover implementation -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- fencing methods, separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence requires passwordless ssh -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- sshfence timeout -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
```

2.2.4 Edit mapred-site.xml:

```
<configuration>
  <!-- run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

2.2.5 Edit yarn-site.xml:

```
<configuration>
  <!-- enable RM high availability -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- RM cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- RM names -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- RM addresses -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop03</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop04</value>
  </property>
  <!-- zookeeper quorum addresses -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

2.2.6 Edit slaves (slaves lists the worker nodes; since HDFS is started on itcast01 and YARN on itcast03, the slaves file on itcast01 lists the DataNodes and the one on itcast03 lists the NodeManagers):

```
hadoop05
hadoop06
hadoop07
```

2.2.7 Configure passwordless login:

```
# first, itcast01 needs passwordless login to hadoop02..hadoop07;
# generate a key pair on hadoop01:
ssh-keygen -t rsa
# copy the public key to all nodes, including itself:
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# then hadoop03 needs passwordless login to hadoop04..hadoop07;
# generate a key pair on hadoop03:
ssh-keygen -t rsa
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# note: the two NameNodes need passwordless ssh between each other;
# do not forget hadoop02 -> hadoop01. On hadoop02:
ssh-keygen -t rsa
ssh-copy-id -i hadoop01
```

2.4 Copy the configured Hadoop to the other nodes:

```
scp -r hadoop-2.4.1/ hadoop02:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop03:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop04:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop05:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop06:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop07:/lichangwu/hadoop-2.4.1/
```

Note: follow the steps below strictly in order.

2.5 Start the ZooKeeper cluster (on hadoop05, hadoop06 and hadoop07):

```
cd /lichangwu/zookeeper-3.4.6/bin/
./zkServer.sh start
./zkServer.sh status   # expect one leader and two followers
```

2.6 Start the JournalNodes (on hadoop05, hadoop06 and hadoop07):

```
cd /lichangwu/hadoop-2.4.1
sbin/hadoop-daemon.sh start journalnode
# jps should now show a JournalNode process on hadoop05/06/07
```

2.7 Format HDFS (on hadoop01):

```
hdfs namenode -format
# formatting creates files under the hadoop.tmp.dir from core-site.xml,
# here /lichangwu/hadoop-2.4.1/tmp; copy that directory to hadoop02:
scp -r tmp/ hadoop02:/lichangwu/hadoop-2.4.1/
```

2.8 Format ZK (on hadoop01 only): hdfs zkfc -formatZK

2.9 Start HDFS (on hadoop01): sbin/start-dfs.sh

2.10 Start YARN (note: run start-yarn.sh on hadoop03; if it does not come up on hadoop04, run start-yarn.sh there once more. The NameNode and ResourceManager are split onto different machines for performance reasons, since both consume a lot of resources): sbin/start-yarn.sh

With that, hadoop-2.4.1 is configured; check in a browser:
http://192.168.175.129:50070 NameNode 'hadoop01:9000' (active)
http://192.168.175.127:50070 NameNode 'hadoop02:9000' (standby)
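
A diagnostic sketch for the symptom itself: when hdfs dfs -ls / lists the local filesystem, the client is not loading this core-site.xml at all, so fs.defaultFS silently falls back to file:///. Checking what the client actually sees narrows it down quickly:

```sh
hdfs getconf -confKey fs.defaultFS   # expect hdfs://ns1, not file:///
echo "$HADOOP_CONF_DIR"              # must point at the etc/hadoop shown above
```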

Data migration between Hadoop clusters

Running bin/hadoop distcp hftp://master:50070/user/wp hdfs://ns1/user/ to migrate data between Hadoop clusters fails with org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.net.SocketTimeoutException: connect timed out

Some warnings when running wordcount on a Hadoop cluster

I am a Hadoop beginner; please help, many thanks! I set up a Hadoop cluster with Docker inside a virtual machine; the Docker image is ubuntu18.04.

On the hadoop1 master node the following services are running:

```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```

And on the two worker nodes:

```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```

```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```

On hadoop1 (the master) I created a /data/input folder on HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```

Every bin/hdfs dfs invocation prints the warning block above. Does this warning affect the cluster in any way, and how can I get rid of it?

The same warnings appear when pushing the test1 file to HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
[same WARNING block as above]
```

and when listing the uploaded file:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
[same WARNING block as above]
Found 1 items
-rw-r--r--   1 root supergroup         60 2019-09-15 08:07 /data/input/test1
```

and when running share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:

```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
[same WARNING block as above]
```

and when checking the wordcount result afterwards:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
[same WARNING block as above]
first 1
hello 2
is 2
my 2
test1 1
testwordcount 1
this 2
```

Could some kind soul help me figure out how to solve this? Many thanks!
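
These warnings come from the JVM, not from the cluster: JDK 9 and later warn about the reflective access that Hadoop 2.9.2, which targets Java 8, performs. They are harmless for the job itself. A sketch of the usual way to silence them (assumption: a JDK 8 is installed at the path shown):

```sh
# in etc/hadoop/hadoop-env.sh, point Hadoop at a Java 8 runtime:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # hypothetical path
bin/hdfs dfs -ls /data/input                         # no WARNINGs under Java 8
```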

Hadoop runtime exception: ReplicaNotFoundException

Browsing the production logs I found a large number of errors; here is one of them. I hope an expert can help me solve this.

```
May 5, 10:07:30.620 AM ERROR org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop-78:50010:DataXceiver error processing READ_BLOCK operation src: /192.0.0.78:34568 dst: /192.0.0.78:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-381875526-172.18.50.76-1450327742712:blk_1075578327_1837535
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:450)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:234)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:530)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
    at java.lang.Thread.run(Thread.java:745)
```
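
A diagnostic sketch: ReplicaNotFoundException on READ_BLOCK usually means clients are requesting block replicas this DataNode no longer has (deleted files, moved replicas, or lost disks). fsck shows whether the namespace itself is healthy:

```sh
hdfs fsck / -list-corruptfileblocks              # any corrupt/missing blocks?
hdfs fsck /some/path -files -blocks -locations   # hypothetical path to inspect
```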

Hadoop NameNode fails to start

```
INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: dao:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
    ... 8 more
17/03/16 10:54:06 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
17/03/16 10:54:06 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
17/03/16 10:54:06 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
17/03/16 10:54:06 FATAL namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: dao:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
    ... 8 more
```

Hadoop cluster time synchronization?

After synchronizing the time across my Hadoop cluster, why does it drift apart again after a while? I already set the time zone to be the same everywhere and configured things with the steps below; does anyone know why?

1. Fix the local time zone and the ntp service:

```
yum -y install ntp
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
/usr/sbin/ntpdate -u pool.ntp.org
```

2. Synchronize the time automatically (add the line below so it syncs every 10 minutes):

```
crontab -e
*/10 * * * * /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1
service crond restart   # restart crond
date                    # check the time
```
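
One possible explanation (an assumption): a cron-driven ntpdate only steps the clock every 10 minutes, so the nodes are free to drift in between, which is exactly what is observed. Running ntpd continuously keeps the clocks disciplined rather than periodically corrected. A sketch:

```sh
systemctl enable ntpd && systemctl start ntpd   # or: service ntpd start
ntpq -p                                         # verify peers are being tracked
```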

ResourceManager process missing from the Hadoop cluster

I accidentally started the cluster as root. Later, stopping the cluster hung for ages, so I hit Ctrl+C, and eventually it stopped. I deleted the hadoopdata directory and the logs, re-ran the format, and started the cluster as the hadoop user. The other machines all come up normally with their four processes; only the master is missing the ResourceManager. What happened, and how do I fix it? ![screenshot](https://img-ask.csdn.net/upload/201706/21/1498013635_908846.png)

Hadoop cluster: is it a problem if data nodes have different storage capacities?

The Hadoop cluster has 8 servers: 3 with 24 TB and 5 with 32 TB. Is this setup a problem? Are there any configuration parameters worth tuning for it?
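
Mixed 24 TB and 32 TB DataNodes generally work, but with the default round-robin volume placement the smaller nodes tend to fill up first. Two commonly suggested mitigations, sketched under those assumptions:

```sh
# 1) periodically rebalance so node utilization stays within 10% of the mean:
hdfs balancer -threshold 10
# 2) keep headroom on every disk by reserving space per volume, via the
#    dfs.datanode.du.reserved property in hdfs-site.xml (bytes per volume).
```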

