No DataNode information on the web UI after starting the Hadoop cluster

[screenshot]
node82 serves as the NameNode; node81, node80, and node79 are DataNodes. jps shows all the processes running, and the web page loads, but it shows no DataNode information.
[screenshot]

[screenshot]

5 answers

qq_25948717
Replying to 大鱼-瓶邪: I tried that method; it didn't work.
about 2 years ago · reply

The DataNodes still haven't started successfully. Check the logs.

I ran into the same error. It was caused by wrong mappings in the worker nodes' /etc/hosts. Check the mappings in that file and see whether you wrote the servers' external (public) IPs by mistake.
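For reference, a minimal sketch of that check; the hostnames and addresses below are hypothetical, so substitute the cluster's own:

```
# On every node, verify the hostname-to-IP mappings:
cat /etc/hosts
# Expected: one entry per cluster node, using the IPs the nodes actually
# use to reach each other (internal IPs, not public ones), e.g.:
#   192.168.1.82  node82
#   192.168.1.81  node81
#   192.168.1.80  node80
#   192.168.1.79  node79

# From each DataNode, confirm the NameNode hostname resolves and responds:
ping -c 1 node82
```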

Just saw this. Re-initializing the NameNode metadata fixed it for me.
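For completeness, a sketch of that re-initialization, assuming hypothetical storage paths; note it erases all HDFS data, so it only makes sense on a test cluster:

```
stop-dfs.sh                             # stop HDFS first

# Clear the old metadata and block directories on every node; the real
# paths come from dfs.namenode.name.dir / dfs.datanode.data.dir in
# hdfs-site.xml (the ones below are hypothetical)
rm -rf /opt/hadoop/dfs/name/* /opt/hadoop/dfs/data/*

hdfs namenode -format                   # re-initialize the metadata
start-dfs.sh                            # restart HDFS
```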

Check the logs on the cluster nodes; there is bound to be an error in them.
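A quick way to do that, assuming default log locations:

```
# On each DataNode, inspect the tail of its log (the file name embeds the
# user and hostname, hence the globs):
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log

# Grep for the usual culprits behind "running but not registered":
grep -iE "clusterID|Incompatible|ConnectException|UnknownHost" \
    $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```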

Other related questions
In my Hadoop cluster, jps on one node doesn't show the DataNode, but the web UI looks normal. What's the cause, and how do I fix it?

The problem is as the title says. The output of `hadoop dfsadmin -report` is as follows:

```
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
19/05/30 09:31:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 151164198912 (140.78 GB)
Present Capacity: 135626715136 (126.31 GB)
DFS Remaining: 135626620928 (126.31 GB)
DFS Used: 94208 (92 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 192.168.182.101:50010 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 50388066304 (46.93 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 2861187072 (2.66 GB)
DFS Remaining: 44960407552 (41.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.23%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 30 09:31:11 CST 2019

Name: 192.168.182.102:50010 (node01)
Hostname: node01
Decommission Status : Normal
Configured Capacity: 50388066304 (46.93 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 2534232064 (2.36 GB)
DFS Remaining: 45287366656 (42.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 30 09:31:09 CST 2019

Name: 192.168.182.103:50010 (node02)
Hostname: node02
Decommission Status : Normal
Configured Capacity: 50388066304 (46.93 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 2442747904 (2.27 GB)
DFS Remaining: 45378846720 (42.26 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.06%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 30 09:31:11 CST 2019
```

Below are the jps results on the three nodes:

![master](https://img-ask.csdn.net/upload/201905/30/1559180151_866479.png)

It's this node01 where jps doesn't show the DataNode:

![node01](https://img-ask.csdn.net/upload/201905/30/1559180180_666279.png)

![node02](https://img-ask.csdn.net/upload/201905/30/1559180191_598391.png)

Yet the web UI does show it. Does anyone understand this? What's the cause, and is there a fix?

![web UI](https://img-ask.csdn.net/upload/201905/30/1559180236_15578.png)
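Not an authoritative diagnosis, but when a process is clearly alive (the report and the web UI both see it) yet jps misses it, jps itself is often the broken part: it only lists JVMs owned by the current user and relies on /tmp/hsperfdata_&lt;user&gt;, which tmp cleaners can delete. A sketch of checks (the user name is hypothetical):

```
# Confirm the DataNode JVM really is running, independent of jps:
ps -ef | grep -i [d]atanode

# If it is, run jps as the same user that started the daemon, and check
# whether its perf-data directory still exists:
ls /tmp/hsperfdata_hadoop

# Restarting the DataNode recreates the jps bookkeeping:
hadoop-daemon.sh stop datanode && hadoop-daemon.sh start datanode
```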

hadoop 2.6.5 cluster: master only starts itself as a DataNode; the slave nodes can't be controlled and produce no logs?

One master and two slaves. Re-formatting and deleting /usr/local/hadoop/logs and /tmp several times had no effect. The error log shows:

```
2019-05-22 15:43:46,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: Master/219.226.109.130:9000
```

But the firewall is off on all nodes, and /etc/hosts on every node is configured as:

```
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
219.226.109.130 Master
219.226.109.129 Slave1
219.226.109.131 Slave2
```
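With the firewall already ruled out, the next things worth checking are whether the NameNode is actually listening on 9000 and on which interface; a minimal sketch:

```
# On Master: is anything listening on 9000, and bound to which address?
# (If it is bound to 127.0.0.1, the slaves can never connect.)
netstat -tlnp | grep 9000

# From a slave: can the port be reached at all?
nc -zv Master 9000
```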

Hadoop cluster DataNode starts successfully, but jps shows no DataNode

This is the jps result on master: ![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635422_43286.png) This is the jps result on the slave: ![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635450_918763.png) This is the 50070 page: ![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635471_368841.png) The result of hadoop dfsadmin -report: ![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635512_251415.png) The VERSION info of master and slave: ![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635571_921496.png) ![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635586_259196.png) Does anyone know what the problem is?
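Since the VERSION files are already being compared, one common cause worth checking (hedged, since the screenshots aren't legible here) is a clusterID mismatch left behind by a NameNode re-format. A sketch with hypothetical paths:

```
# Compare the clusterID on both sides:
cat /opt/hadoop/dfs/name/current/VERSION   # on the NameNode
cat /opt/hadoop/dfs/data/current/VERSION   # on each DataNode

# If they differ, either edit the DataNode's VERSION to carry the
# NameNode's clusterID, or wipe the DataNode storage and restart it:
rm -rf /opt/hadoop/dfs/data/*
hadoop-daemon.sh start datanode
```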

No data under the datanode folder after starting a pseudo-distributed Hadoop cluster?

While learning Hadoop, I started a pseudo-distributed cluster. Startup went fine with no errors, and jps shows everything running. The NameNode's name directory looks normal, but the /opt/module/hadoop-2.7.2/data/tmp/dfs/data directory (the DataNode's directory) has no contents.

- The NameNode and DataNode are started. ![screenshot](https://img-ask.csdn.net/upload/202003/23/1584957766_972031.png)
- The contents of the NameNode's name folder are normal. ![screenshot](https://img-ask.csdn.net/upload/202003/23/1584957934_490872.png)
- The DataNode's data folder is blank; shouldn't it contain VERSION and so on? ![screenshot](https://img-ask.csdn.net/upload/202003/23/1584958028_101395.png)
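One hedged observation: the DataNode only creates current/VERSION and friends once it has successfully registered with the NameNode and has had something to store. A sketch to force that and re-check (the probe path is made up):

```
# Write something into HDFS so the DataNode has work to do:
hdfs dfs -mkdir -p /tmp/probe
hdfs dfs -put $HADOOP_HOME/etc/hadoop/core-site.xml /tmp/probe/

# Re-inspect the data directory from the question:
ls -R /opt/module/hadoop-2.7.2/data/tmp/dfs/data

# Still empty? Then the DataNode likely never registered; its log says why:
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```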

A problem with the Hadoop DataNode

On CentOS 7, why is the DataNode missing from jps every time I start Hadoop? I have to delete the directory, recreate it, and re-format before the DataNode appears. I've deleted and recreated the files and the Hadoop install and re-formatted, and it's still the same. The environment and configuration are otherwise fine.
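A hedged guess at the cause: if the DataNode's storage sits under /tmp (the default when only hadoop.tmp.dir is set), the OS may clean it between runs, taking the DataNode's VERSION with it, which looks exactly like "must re-format every time". A sketch of the fix, with a hypothetical path:

```
# Point the DataNode at persistent storage in hdfs-site.xml:
#   <property>
#     <name>dfs.datanode.data.dir</name>
#     <value>/opt/hadoop/dfs/data</value>
#   </property>
mkdir -p /opt/hadoop/dfs/data

# One last clean restart; after this, no more re-formatting should be needed:
stop-dfs.sh && start-dfs.sh
```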

Hadoop nodes start successfully, but master can't find them

The DataNodes started successfully; jps on the three node VMs shows the DataNode, but on master I can't see them. localhost:50070 does show the nodes, though. Does anyone know what this problem is? Thanks. I've already tried the methods found online, clearing tmp-type folders and re-formatting, but they didn't help.
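Worth noting: jps only lists Java processes on the local machine, so running it on master will never show the slaves' DataNodes. To see cluster membership from master, ask the NameNode instead:

```
# Lists every registered DataNode, live or dead - the same data as the
# 50070 page, but from the shell:
hdfs dfsadmin -report
```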

Is it a problem if Hadoop cluster data nodes have different storage capacities?

The Hadoop cluster has 8 servers: 3 with 24 TB and 5 with 32 TB. Is this setup a problem? Are there configuration parameters worth tuning?
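Heterogeneous capacities are supported; the practical risk is just that the smaller nodes fill up first. Two knobs that help, with illustrative values rather than recommendations:

```
# Rebalance periodically so utilization stays within ~10% of the
# cluster average across nodes:
hdfs balancer -threshold 10

# Optionally reserve per-node headroom for non-HDFS use in hdfs-site.xml:
#   <property>
#     <name>dfs.datanode.du.reserved</name>
#     <value>107374182400</value>   <!-- 100 GB, illustrative -->
#   </property>
```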

Querying block information in HDFS when three DataNodes fail simultaneously

**A question about HDFS high availability:**

Scenario: a Hadoop cluster with, say, 100 nodes and a replication factor of 3; at some moment, three hosts go down at the same time.

Question: with a large amount of data in the cluster, there are bound to be one or more blocks whose three replicas all sat on those three failed hosts.

1. How do I find the information for those blocks (IDs and so on), and the names of the files they belong to?
2. Alternatively, where can I directly see which files have become unavailable: the master's monitoring page, or a native hadoop shell query?
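fsck answers both questions directly; a sketch:

```
# 1) List every block the NameNode considers corrupt/missing, together
#    with the file that owns it:
hdfs fsck / -list-corruptFileBlocks

# For one file, show its block IDs and where the replicas live:
hdfs fsck /path/to/file -files -blocks -locations

# 2) The NameNode web UI front page shows the "Missing blocks" counter,
#    and `hdfs dfsadmin -report` prints the same number.
```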

One DataNode fails to start in a hadoop 2.x cluster deployment

```
Exception in secureMain
java.net.UnknownHostException: node1: node1
        at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
        at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187)
        at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2153)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
Caused by: java.net.UnknownHostException: node1
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
        ... 6 more
2015-01-16 09:08:54,152 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-01-16 09:08:54,164 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: node1: node1
************************************************************/
```

Environment: Ubuntu, hadoop 2.6, jdk 7. Three VMs are deployed: one namenode (master) and two datanodes (node1, node2); /etc/hostname is configured as master, node1, and node2 respectively. /etc/hosts is configured as:

```
27.0.0.1        localhost
127.0.1.1       ubuntu.localdomain      ubuntu
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.184.129 master
192.168.184.130 node1
192.168.184.131 node2
```

hadoop/etc/hadoop/slaves is configured as:

```
node1
node2
```

core-site.xml:

```
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/yangwq/hadoop-2.6.0/temp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>
```

hdfs-site.xml:

```
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/yangwq/hadoop-2.6.0/dfs/name</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/yangwq/hadoop-2.6.0/dfs/data</value>
    </property>
</configuration>
```

mapred-site.xml:

```
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
</configuration>
```

yarn-site.xml:

```
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!-- resourcemanager hostname or ip address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
</configuration>
```

On startup, the DataNode on node1 never comes up, even though ssh logins to every node work fine.
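The exception is thrown by getLocalHost(), i.e. node1 cannot resolve its *own* hostname, which usually means the /etc/hosts shown above was not actually in effect on node1 itself. A sketch of checks to run on node1:

```
hostname               # should print: node1
getent hosts node1     # should print: 192.168.184.130 node1

# If getent prints nothing, the hosts entries probably exist only on
# master; copy the three mapping lines into node1's /etc/hosts and
# start the DataNode again:
hadoop-daemon.sh start datanode
```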

Help: Hadoop SecondaryNameNode not starting

I set up a Hadoop cluster with one namenode and two datanodes. After starting, the namenode and ResourceManager processes came up, but the SecondaryNameNode did not. Why? The datanode and nodemanager processes on both datanodes started fine, and uploading and downloading files works normally. I did configure the secondarynamenode in hdfs-site. Please help. Also, when stopping Hadoop, the datanode logs report an EOFException; what causes that?
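A hedged first step: see what address the SecondaryNameNode was actually given, then start it by hand so any failure lands in its own log:

```
# What address did the config resolve to?
hdfs getconf -confKey dfs.namenode.secondary.http-address

# Start it manually and read its log:
hadoop-daemon.sh start secondarynamenode
tail -n 50 $HADOOP_HOME/logs/hadoop-*-secondarynamenode-*.log
```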

After setting up the Hadoop cluster, the DataNode weirdly starts on the master machine instead of on the slaves

I have one master and 3 slaves. The problem: after setting up the cluster, I run start-all.sh on master, and the DataNode also starts on that machine instead of on the slaves. I'm using hadoop 3.1 + jdk 1.8. Previously, following the book with hadoop 2.6 + openjdk 10, everything started normally (namenode etc. on master, datanode etc. on the slaves); after switching to the new version it stopped working, and I've spent a whole day on it.

Current state: all machines can ping each other, can ssh to each other, and have internet access. If a single machine acts as both master and slave, the 50070 page (changed to 9870 since Hadoop 3) opens fine and everything works.

When running start-all.sh on master:

```
WARNING: Attempting to start all Apache Hadoop daemons as hduser in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [emaster]
Starting datanodes
Starting secondary namenodes [emaster]
2018-05-04 22:39:37,858 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
```

It never even tries the slaves. jps then shows:

```
3233 SecondaryNameNode
3492 ResourceManager
2836 NameNode
3653 NodeManager
3973 Jps
3003 DataNode
```

Everything started on master; nothing at all started on the slaves. The DataNode log shows it started master's DataNode three times (my master is named emaster); an excerpt:

```
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = emaster/192.168.56.100
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.1.0
```

and each time there is an error:

```
java.io.EOFException: End of File Exception between local host is: "emaster/192.168.56.100"; destination host is: "emaster":9000; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:789)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
        at org.apache.hadoop.ipc.Client.call(Client.java:1437)
        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy20.sendHeartbeat(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:166)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:514)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:645)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:841)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1796)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1165)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1061)
2018-05-04 21:49:43,320 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
```

My guess: I set up 3 slaves but something in the configuration is wrong, so master doesn't start the slaves' DataNodes and starts its own instead. I just can't find where the error is; the masters and slaves files are both configured, and all machines can ping each other. Can anyone point this beginner in the right direction? Many thanks!
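One very likely cause, given the jump from 2.6 to 3.1: Hadoop 3.x renamed the `slaves` file to `workers`, so a slaves file from a 2.x guide is silently ignored and start-dfs.sh falls back to starting a DataNode locally. A sketch (the slave hostnames below are placeholders):

```
# On emaster, list the worker hosts under the new file name:
cat > $HADOOP_HOME/etc/hadoop/workers <<'EOF'
eslave1
eslave2
eslave3
EOF

# Restart everything:
stop-all.sh && start-all.sh
```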

A problem adding a new node to a Hadoop cluster

A pseudo-cluster on CentOS 6.5. At first there were two VMs, master and slave, and everything ran normally. Today I cloned another slave node, changed its hostname to slave2, updated the hosts file and the slaves file, and added it to the cluster. jps on every machine looks normal, but viewing the web UI shows that the newly added slave2's DataNode did not start, even though jps shows the process is fine; restarting doesn't help. What's going on, and how do I fix it? Please advise.
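A plausible explanation for "jps shows it but the web UI doesn't": the cloned VM carries a byte-for-byte copy of the original slave's DataNode storage, including its storage identity, so the NameNode treats slave2 as a duplicate of the node it was cloned from and lists only one of them. A sketch of the usual fix on slave2 (the storage path is hypothetical):

```
hadoop-daemon.sh stop datanode
rm -rf /opt/hadoop/dfs/data/*     # drop the cloned identity
hadoop-daemon.sh start datanode   # re-registers with a fresh one
```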

Pseudo-distributed Hadoop on Windows: datanode and tasktracker not found

In pseudo-distributed Hadoop on Windows, after running jps I can't find the datanode and tasktracker; only the entries in the screenshot show up, and the report command for inspecting the cluster fails too. Please help! ![screenshot](https://img-ask.csdn.net/upload/201511/25/1448441622_501390.png)

Internal vs. external IP errors when network nodes upload files to a Hadoop cluster

The company needs a Hadoop setup to collect logs from server nodes across the network. Hadoop is hosted on Microsoft Azure, where the nodes (namenode and datanodes) communicate over internal IPs, but the ISP's server nodes must upload over external IPs. My plan was to modify the Hadoop Java source for this requirement, recompile a new package, and deploy it to the Azure VMs, but I can't obtain the NAT mapping table, so there is no way to map internal IPs to external IPs in code. Is there another solution?
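HDFS has a supported knob for exactly this split-horizon case, so no source changes should be needed: make clients connect to DataNodes by hostname rather than by the internal IP the NameNode reports, and resolve those hostnames to public IPs on the client side. A sketch:

```
# In the *client's* hdfs-site.xml:
#   <property>
#     <name>dfs.client.use.datanode.hostname</name>
#     <value>true</value>
#   </property>
# then map each DataNode hostname to its public IP in the client's
# /etc/hosts (or public DNS).

# Sanity check on the client:
hdfs getconf -confKey dfs.client.use.datanode.hostname
```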

Why does the master node show up among the DataNodes?

The three nodes had been running for hours when I suddenly noticed slave2 was down, but on slave2 jps shows everything that should be there. I read online that you can start the DataNode with hadoop-daemon.sh start datanode. After doing that, looking at master:50070, master now appears in the DataNode list... What is going on? Please explain.
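One likely explanation, assuming the command was typed on master: hadoop-daemon.sh starts a daemon on whichever machine it is run on, so it started a new DataNode on master rather than reviving slave2's. A sketch to undo that:

```
hadoop-daemon.sh stop datanode                  # run this on master
ssh slave2 'hadoop-daemon.sh start datanode'    # start the right one
```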

Hadoop cluster: hdfs dfs -ls / lists the wrong directory

I set up a Hadoop HA cluster. With the command `hdfs dfs -ls /`, it lists the root of the *local* filesystem; only `hdfs dfs -ls hdfs://servicename/` lists the directories on HDFS. What could be the reason? The directories created by Hive also end up on the local filesystem. The cluster configuration is as follows.

Cluster plan:

| Hostname | IP | Software installed | Processes |
| --- | --- | --- | --- |
| hadoop01 | 192.168.175.129 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| hadoop02 | 192.168.175.127 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| hadoop03 | 192.168.175.126 | jdk, hadoop | ResourceManager |
| hadoop04 | 192.168.175.125 | jdk, hadoop | ResourceManager |
| hadoop05 | 192.168.175.124 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| hadoop06 | 192.168.175.123 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| hadoop07 | 192.168.175.122 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |

(Windows: NLB; Linux: LVS)

1. After installing the Linux VMs, set the VM network to host-only mode, then assign IPs (hadoop01 as the example):

```
DEVICE="eth0"
BOOTPROTO="static"        ###
HWADDR="00:0C:29:3C:BF:E7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.175.129"  ###
NETMASK="255.255.255.0"   ###
GATEWAY="192.168.175.1"   ###
```

2. Change the hostname in /etc/sysconfig/network:

```
NETWORKING=yes
HOSTNAME=hadoop01   ###
```

3. Disable the firewall:

```
service iptables status      # check firewall status
service iptables stop        # stop the firewall
chkconfig iptables --list    # check whether it starts at boot
chkconfig iptables off       # disable start at boot
```

4. Passwordless login:

```
# generate an ssh key pair in the home directory
cd ~/.ssh
ssh-keygen -t rsa   # (four Enters) creates id_rsa (private) and id_rsa.pub (public)
# copy the public key to the machines that need passwordless login
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# or, if ssh-copy-id reports "ERROR: No identities found" (it cannot find
# the public key path), pass the path explicitly with -i:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote_ip
```

5. Host/IP mappings (the full set must be present in /etc/hosts on every machine):

```
192.168.175.129 hadoop01
192.168.175.127 hadoop02
192.168.175.126 hadoop03
192.168.175.125 hadoop04
192.168.175.124 hadoop05
192.168.175.123 hadoop06
192.168.175.122 hadoop07
```

6. Configure the Java environment in /etc/profile:

```
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile    # reload profile
```

If a version error is reported, set SELINUX=disabled in /etc/selinux/config and reboot the VM.

7. Install zookeeper (on hadoop05):

```
# 1.1 unpack
tar -zxvf zookeeper-3.4.6.tar.gz -C /lichangwu/
# 1.2 configure
cd /lichangwu/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
#   change: dataDir=/lichangwu/zookeeper-3.4.6/tmp
#   append:
#     server.1=hadoop05:2888:3888
#     server.2=hadoop06:2888:3888
#     server.3=hadoop07:2888:3888
mkdir /lichangwu/zookeeper-3.4.6/tmp
touch /lichangwu/zookeeper-3.4.6/tmp/myid
echo 1 > /lichangwu/zookeeper-3.4.6/tmp/myid
# 1.3 copy the configured zookeeper to the other nodes
#     (first create /lichangwu on hadoop06 and hadoop07: mkdir /lichangwu)
scp -r /lichangwu/zookeeper-3.4.6/ hadoop06:/lichangwu/
scp -r /lichangwu/zookeeper-3.4.6/ hadoop07:/lichangwu/
# then fix the myid on the copies:
#   hadoop06: echo 2 > /lichangwu/zookeeper-3.4.6/tmp/myid
#   hadoop07: echo 3 > /lichangwu/zookeeper-3.4.6/tmp/myid
```

8. Install and configure the Hadoop cluster (on hadoop01):

2.1 Unpack: `tar -zxvf hadoop-2.4.1.tar.gz -C /lichangwu/`

2.2 Configure HDFS (all hadoop 2.0 config files live under $HADOOP_HOME/etc/hadoop). Add hadoop to the environment in /etc/profile:

```
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export HADOOP_HOME=/lichangwu/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
cd /lichangwu/hadoop-2.4.1/etc/hadoop
```

2.2.1 In hadoop-env.sh: `export JAVA_HOME=/lichangwu/jdk1.7.0_79`

2.2.2 core-site.xml:

```
<configuration>
    <!-- set the hdfs nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- hadoop temp directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/lichangwu/hadoop-2.4.1/tmp</value>
    </property>
    <!-- zookeeper quorum -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
</configuration>
```

2.2.3 hdfs-site.xml:

```
<configuration>
    <!-- nameservice ns1; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- nn1 RPC address -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop01:9000</value>
    </property>
    <!-- nn1 http address -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- nn2 RPC address -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop02:9000</value>
    </property>
    <!-- nn2 http address -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <!-- where NameNode metadata is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
    </property>
    <!-- where JournalNodes keep their local data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/lichangwu/hadoop-2.4.1/journal</value>
    </property>
    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- failover implementation -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless ssh -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
```

2.2.4 mapred-site.xml:

```
<configuration>
    <!-- run MR on yarn -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

2.2.5 yarn-site.xml:

```
<configuration>
    <!-- enable RM high availability -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- RM names -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- RM addresses -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop04</value>
    </property>
    <!-- zk quorum -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```

2.2.6 slaves (this file specifies the worker nodes; HDFS is started on hadoop01 and yarn on hadoop03, so the slaves file on hadoop01 lists the DataNode hosts and the one on hadoop03 lists the NodeManager hosts):

```
hadoop05
hadoop06
hadoop07
```

2.2.7 Passwordless login:

```
# First, hadoop01 must reach hadoop02 through hadoop07 without a password.
# On hadoop01, generate a key pair:
ssh-keygen -t rsa
# Copy the public key to every node, including hadoop01 itself:
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# Likewise, hadoop03 must reach hadoop04 through hadoop07.
# On hadoop03, run ssh-keygen -t rsa, then:
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# Note: the two NameNodes need passwordless ssh between each other as
# well - don't forget hadoop02 -> hadoop01.
# On hadoop02, run ssh-keygen -t rsa, then:
ssh-copy-id -i hadoop01
```

2.4 Copy the configured hadoop to the other nodes:

```
scp -r hadoop-2.4.1/ hadoop02:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop03:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop04:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop05:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop06:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop07:/lichangwu/hadoop-2.4.1/
```

Note: follow the remaining steps strictly in order.

2.5 Start the zookeeper cluster (on hadoop05, hadoop06 and hadoop07):

```
cd /lichangwu/zookeeper-3.4.6/bin/
./zkServer.sh start
./zkServer.sh status    # expect one leader and two followers
```

2.6 Start the journalnodes (on hadoop05, hadoop06 and hadoop07):

```
cd /lichangwu/hadoop-2.4.1
sbin/hadoop-daemon.sh start journalnode
# jps should now show a JournalNode process on each of the three nodes
```

2.7 Format HDFS (on hadoop01):

```
hdfs namenode -format
# Formatting generates files under the hadoop.tmp.dir from core-site.xml,
# here /lichangwu/hadoop-2.4.1/tmp; copy that directory over to hadoop02:
scp -r tmp/ hadoop02:/lichangwu/hadoop-2.4.1/
```

2.8 Format ZK (on hadoop01 only): `hdfs zkfc -formatZK`

2.9 Start HDFS (on hadoop01): `sbin/start-dfs.sh`

2.10 Start YARN (note: run start-yarn.sh on hadoop03; if it doesn't come up on hadoop04, run start-yarn.sh there once more. The NameNodes and ResourceManagers are split across machines for performance, since both consume a lot of resources): `sbin/start-yarn.sh`

With that, hadoop-2.4.1 is configured and can be checked in a browser:
http://192.168.175.129:50070 — NameNode 'hadoop01:9000' (active)
http://192.168.175.127:50070 — NameNode 'hadoop02:9000' (standby)
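On the actual question: if `hdfs dfs -ls /` shows the local root, the client never loaded core-site.xml, so fs.defaultFS silently fell back to file:///. Some hedged checks:

```
# What does the client actually resolve? Should print hdfs://ns1, not file:///
hdfs getconf -confKey fs.defaultFS

# Usual causes: HADOOP_CONF_DIR pointing at the wrong directory, or a
# second hadoop binary earlier in PATH. Verify both:
echo $HADOOP_CONF_DIR
which hdfs
```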

Warnings when running wordcount on a Hadoop cluster

I'm a Hadoop beginner; please help, many thanks! I built a Hadoop cluster with docker inside a VM; the docker image is ubuntu18.04.

First, my hadoop1 master node runs the following services:

```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```

And the two worker nodes:

```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```

```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```

On hadoop1 (the master) I create a /data/input folder structure in HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```

Every bin/hdfs dfs invocation prints this pile of warnings. Does the warning affect the Hadoop cluster at all, and how do I get rid of it?

The same warnings appear when pushing the test1 file into HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
WARNING: ... (same warnings as above)
```

And when listing the uploaded file:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
WARNING: ... (same warnings as above)
Found 1 items
-rw-r--r--   1 root supergroup         60 2019-09-15 08:07 /data/input/test1
```

And when running share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:

```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
WARNING: ... (same warnings as above)
```

And when viewing the wordcount result afterwards:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
WARNING: ... (same warnings as above)
first 1
hello 2
is 2
my 2
test1 1
testwordcount 1
this 2
```

Could some expert help me figure out how to solve this? Many thanks!
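These warnings are characteristic of running Hadoop 2.9.x on Java 9 or newer; the jobs themselves are unaffected, as the successful wordcount output shows. Hadoop 2.x is validated on Java 8, so the clean way to silence them is to run on a JDK 8 (the path below is hypothetical):

```
java -version     # check which JVM is in use now

# Point Hadoop at a JDK 8, both in the shell and in
# $HADOOP_HOME/etc/hadoop/hadoop-env.sh, then restart the daemons:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```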

Newbie help: the NodeManager process on the Hadoop cluster slaves dies by itself

s100 is the master; s101, s102 and s103 are slaves. After namenode -format and start-all.sh, I can see the NodeManager process on the slaves, but the web UI at http://S100:50070 shows no datanode information. Looking at the yarn-root-nodemanager-S101.log on the slave, I found errors:

```
2019-06-01 02:20:02,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...
2019-06-01 02:20:08,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,624 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater
java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
        at org.apache.hadoop.ipc.Client.call(Client.java:1480)
        at org.apache.hadoop.ipc.Client.call(Client.java:1413)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
        at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
        at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
        at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
        at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
        at org.apache.hadoop.ipc.Client.call(Client.java:1452)
        ... 18 more

[the same ConnectException is then re-raised as a YarnRuntimeException while
 NodeStatusUpdaterImpl and NodeManager fail in state STARTED, each with an
 identical stack trace, followed by the shutdown of the embedded HTTP and
 IPC servers and the NodeManager metrics system]

2019-06-01 02:20:08,843 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-01 02:20:08,857 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at S101/127.0.0.1
************************************************************/
```

Could some expert take a look? Thanks in advance. Below is the slaves' configuration.

yarn-site.xml:

```
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>S100</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/disk1/nm-local-dir,/disk2/nm-local-dir</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce.shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>16384</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>16</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>S100:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>S100:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>S100:8031</value>
    </property>
</configuration>
```

core-site.xml:

```
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://S100:8020/</value>
    </property>
</configuration>
```
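Two details in the log stand out, hedged as follows:

```
# 1) "Call From S101/127.0.0.1" means S101 resolves its own hostname to
#    127.0.0.1; remove any such line from S101's /etc/hosts and keep only
#    the real address (the IP below is hypothetical): 192.168.17.101 S101
getent hosts S101            # should NOT print 127.0.0.1

# 2) "Connection refused ... S100:8031" means the ResourceManager is not
#    reachable; confirm it is actually up and listening on S100:
jps | grep ResourceManager
netstat -tlnp | grep 8031
```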

A question about starting Hadoop

After starting all services, http://localhost:50030 (the MapReduce web page) is reachable, but http://localhost:50070 (the HDFS web page) returns an error.
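50030 responding while 50070 doesn't usually means the JobTracker came up but the NameNode didn't; a sketch of checks:

```
jps | grep -i namenode        # is the NameNode process alive?
netstat -tln | grep 50070     # is anything listening on 50070?
# If not, the NameNode log should say why it exited:
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log
```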
