Error: HADOOP_HOME is not set correctly - how do I fix this error?

This error appears when starting hadoop. What should I do? It is HADOOP_HOME, not JAVA_HOME,
but everything I find online is about JAVA_HOME. Please help!
Error: HADOOP_HOME is not set correctly

Please set your HADOOP_HOME variable to the absolute path of
the directory that contains hadoop-core-VERSION.jar

3 answers

Check whether HADOOP_HOME is set correctly in your environment variables.
The directory it points to should be your Hadoop installation directory.
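
For reference, a minimal sketch of what a working setup looks like in a login shell; /usr/local/hadoop below is only an example path, substitute your own install directory:

```
# ~/.bashrc (or /etc/profile): point HADOOP_HOME at the unpacked install,
# i.e. the directory that contains hadoop-core-VERSION.jar
export HADOOP_HOME=/usr/local/hadoop            # example path, adjust to yours
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# reload the profile and verify
source ~/.bashrc
echo $HADOOP_HOME
ls $HADOOP_HOME/*hadoop-core*.jar 2>/dev/null   # should list the jar on Hadoop 1.x
```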

It's not Java. The error comes when starting hadoop, not when starting Java, and there is no eclipse involved either.

Other related questions
Running start-dfs.sh for hadoop reports master: ERROR: JAVA_HOME is not set and could not be found.
I have already set the absolute path of JAVA_HOME in hadoop-env.sh, but it still errors:

```
###
# Generic settings for HADOOP
###

# Technically, the only required environment variable is JAVA_HOME.
# All others are optional.  However, the defaults are probably not
# preferred.  Many sites configure these options outside of Hadoop,
# such as in /etc/profile.d

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_211
```

My VM is VMware Workstation 14.0.0, the Linux is ubuntu 12.04 desktop amd64, the jdk is jdk-1.8.0_211, and hadoop is 3.1.2.
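
One thing worth ruling out (an assumption, since the snippet above looks correct): the start scripts read the etc/hadoop/hadoop-env.sh of the tree they are launched from, so confirm the edited file is the one actually in use and that the export line is not commented out. A quick check:

```
# which install do the scripts come from?
which start-dfs.sh
# is JAVA_HOME really exported in the hadoop-env.sh that belongs to it?
grep -n 'JAVA_HOME' $HADOOP_HOME/etc/hadoop/hadoop-env.sh
# Hadoop 3.x can print the environment it computed
$HADOOP_HOME/bin/hadoop envvars
```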
Linux access to a Windows shared folder fails with protocol negotiation failed: NT_STATUS_CONNECTION_RESET
On centos6.5, Linux accesses a Windows shared folder; listing the shares now errors with protocol negotiation failed: NT_STATUS_CONNECTION_RESET. Other Windows machines can access it normally. Where is the setting that's wrong?

```
[root@hadoop samba]# smbclient -L //192.168.1.102 -U zhang
Enter zhang's password:
protocol negotiation failed: NT_STATUS_CONNECTION_RESET
```
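
One commonly reported cause, offered here as an assumption rather than a diagnosis: recent Windows builds disable SMB1, while an old smbclient only negotiates SMB1, so the server resets the connection. If the client's Samba is new enough to speak SMB2, forcing a newer dialect in /etc/samba/smb.conf is worth trying; otherwise either upgrade Samba or re-enable SMB1 on the Windows side:

```
# /etc/samba/smb.conf (sketch; requires a Samba build whose client supports SMB2+)
[global]
    client min protocol = SMB2
    client max protocol = SMB3
```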
Hadoop pseudo-distributed mode is configured, but starting the processes reports JAVA_HOME is not set
![image](https://img-ask.csdn.net/upload/201708/14/1502641191_462166.png) This is what my hadoop-env.sh file looks like when opened (latest Hadoop). I put my Java path right after the =, but after saving, running start-all.sh still fails. Please advise! Also, I'm connecting with PuTTY, and start-all.sh errors there too; typing start-all.sh in the centos terminal says the command doesn't exist????
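
On the "command not found" part: start-all.sh lives under sbin, which is usually not on PATH. A sketch assuming a standard layout, with /opt/hadoop as a stand-in for the real install path:

```
# etc/hadoop/hadoop-env.sh: JAVA_HOME must be an exported absolute path
export JAVA_HOME=/usr/java/jdk1.8.0_144      # example path only

# either put sbin on PATH...
export PATH=$PATH:/opt/hadoop/sbin           # /opt/hadoop is an example
# ...or call the script by full path
/opt/hadoop/sbin/start-all.sh
```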
Starting hadoop on windows reports JAVA_HOME is incorrectly set.
![image](https://img-ask.csdn.net/upload/201507/18/1437229276_159213.jpg) Note that this is windows. JAVA_HOME is unquestionably configured; typing java in CMD works fine.
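
A frequent culprit on Windows (an assumption here, since the screenshot is not visible): a space in the JDK path such as C:\Program Files\..., which the Hadoop .cmd scripts do not quote. Using the 8.3 short name sidesteps the space:

```
:: etc\hadoop\hadoop-env.cmd (sketch); PROGRA~1 is the 8.3 name of "Program Files"
set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_45    :: example version, use your own JDK folder
```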
Starting Hadoop on Win10 reports Error: Could not find or load main class PC
I installed Hadoop 2.9.2 on Windows 10 and set, in hadoop-env.cmd:

```
set JAVA_HOME=C://PROGRA~1/java/jdk1.8.0_191
set HADOOP_PREFIX=E://hadoop-2.9.2
```

Running Hadoop commands then behaves oddly.

Case 1: running the hadoop command directly:

```
E:\hadoop-2.9.2\bin>hadoop
Usage: hadoop [--config confdir] [--loglevel loglevel] COMMAND
where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the Hadoop jar and the required libraries
  credential           interact with credential providers
  key                  manage keys via the KeyProvider
  daemonlog            get/set the log level for each daemon
 or CLASSNAME          run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
```

The command works.

Case 2: hadoop classpath:

```
E:\hadoop-2.9.2\bin>hadoop classpath
E:\hadoop-2.9.2\etc\hadoop;E:\hadoop-2.9.2\share\hadoop\common\lib\*;E:\hadoop-2.9.2\share\hadoop\common\*;E:\hadoop-2.9.2\share\hadoop\hdfs;E:\hadoop-2.9.2\share\hadoop\hdfs\lib\*;E:\hadoop-2.9.2\share\hadoop\hdfs\*;E:\hadoop-2.9.2\share\hadoop\yarn;E:\hadoop-2.9.2\share\hadoop\yarn\lib\*;E:\hadoop-2.9.2\share\hadoop\yarn\*;E:\hadoop-2.9.2\share\hadoop\mapreduce\lib\*;E:\hadoop-2.9.2\share\hadoop\mapreduce\*
```

All classpath entries display correctly.

Case 3: **hadoop version errors out**

```
E:\hadoop-2.9.2\bin>hadoop version
Error: Could not find or load main class PC
```

I looked around; some sources say the classpath is wrong, but the classpath really does point at my Hadoop install. Has anyone run into something similar? Please share, thank you.
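
A plausible cause to check, inferred from the stray word "PC" (so an assumption, not a confirmed diagnosis): the .cmd scripts pass -Dhadoop.id.str=%HADOOP_IDENT_STRING%, and HADOOP_IDENT_STRING defaults to %USERNAME%; if the Windows user name contains a space (for example one ending in "PC"), the java argument splits there and java tries to load a class named PC. A sketch of the workaround:

```
:: etc\hadoop\hadoop-env.cmd: give Hadoop a space-free identity string
set HADOOP_IDENT_STRING=hadoop
```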
SparkStreaming program errors in yarn mode, please help!
```
WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1543366370005_9010_01_000005 on host: hadoop009. Exit status: -100. Diagnostics: Container released on a *lost* node
WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1543366370005_9010_01_000011 on host: hadoop009. Exit status: -100. Diagnostics: Container released on a *lost* node
ERROR YarnScheduler: Lost executor 4 on hadoop009: Container marked as failed: container_1543366370005_9010_01_000005 on host: hadoop009. Exit status: -100. Diagnostics: Container released on a *lost* node
ERROR YarnScheduler: Lost executor 10 on hadoop009: Container marked as failed: container_1543366370005_9010_01_000011 on host: hadoop009. Exit status: -100. Diagnostics: Container released on a *lost* node
ERROR YarnScheduler: Lost an executor 10 (already removed): Pending loss reason.
ERROR YarnScheduler: Lost an executor 4 (already removed): Pending loss reason.
```
Linux: sourcing the config file does not take effect
Background: while setting up hadoop-2.6.0, I found at startup that the package was missing its bin directory, so I switched to hadoop-2.5.1 and then edited the config file. Throughout, the only file modified was ~/.bash_profile.

Details. The config while on hadoop-2.6.0:

```
export PATH
export JAVA_HOME=/usr/java/jdk1.8.0_111
export PATH=$PATH:$JAVA_HOME/bin
export ZOOKEEPER_HOME=/usr/bigData/soft/zookeeper-3.4.9
export PATH=$PATH:$ZOOKEEPER_HOME/bin
export HADOOP_HOME=/usr/bigData/soft/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/bin
```

The config after switching to hadoop-2.5.1:

```
export PATH
export JAVA_HOME=/usr/java/jdk1.8.0_111
export PATH=$PATH:$JAVA_HOME/bin
export ZOOKEEPER_HOME=/usr/bigData/soft/zookeeper-3.4.9
export PATH=$PATH:$ZOOKEEPER_HOME/bin
export HADOOP_HOME=/usr/bigData/soft/hadoop-2.5.1
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/bin
```

The problem: after changing the Hadoop version in the config file to 2.5.1 and running source ~/.bash_profile, starting hadoop-daemon.sh start journalnode still complains that the bin directory is missing under hadoop-2.6; it never picks up the bin directory under 2.5. (The 2.5 package was installed before and has no issues, so I infer the source did not take effect.) Could someone explain what is going on and how to fix it?
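
A likely explanation, judging only from the profile shown: every line appends to PATH, so the hadoop-2.6.0 entries added when the shell first logged in still sit in front of the 2.5.1 entries added by re-sourcing, and the shell keeps resolving hadoop-daemon.sh to the 2.6.0 copy. A quick way to confirm, then reset:

```
# list every hadoop entry on PATH in resolution order
echo $PATH | tr ':' '\n' | grep -n hadoop
# which script actually runs?
which hadoop-daemon.sh
# clear bash's cached command locations, or simply log out and back in
hash -r
```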
Developing hadoop in eclipse: which version should the referenced hadoop_home be?
Newbie question: I'm using the plugin in eclipse, and the cluster location connects fine. Which version should the referenced hadoop directory be, hadoop1.2 or 2.*? Is it the source package? A download link would be appreciated. Cluster: hadoop2.5 cdh5.3.0, hbase0.98 ![image](https://img-ask.csdn.net/upload/201510/12/1444662716_661647.jpg) ![What should this contain?](https://img-ask.csdn.net/upload/201510/12/1444662736_157271.jpg)
Using sqoop to move data from Oracle into hive errors out
![image](https://img-ask.csdn.net/upload/201504/16/1429180711_592161.png)

```
bash-4.1$ sqoop import --connect jdbc:oracle:thin:@192.168.1.169:1521:orcl --username HADOOP --password hadoop2015 --table CALC_UPAY_DATE_HADOOP_HDFS --split-by UPAYID --hive-import
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
find: paths must precede expression: ant-eclipse-1.0-jvm1.2.jar
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
15/04/16 03:28:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.2
15/04/16 03:28:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/04/16 03:28:13 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
15/04/16 03:28:13 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
15/04/16 03:28:13 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/16 03:28:13 INFO tool.CodeGenTool: Beginning code generation
15/04/16 03:28:13 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CALC_UPAY_DATE_HADOOP_HDFS t WHERE 1=0
15/04/16 03:28:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/04/16 03:28:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/e9286bf0e7d796ba396d3155210012b0/CALC_UPAY_DATE_HADOOP_HDFS.jar
15/04/16 03:28:15 INFO mapreduce.ImportJobBase: Beginning import of CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:15 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/04/16 03:28:15 INFO manager.OracleManager: Time zone has been set to GMT
15/04/16 03:28:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/04/16 03:28:16 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.201:8032
15/04/16 03:28:18 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/16 03:28:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(UPAYID), MAX(UPAYID) FROM CALC_UPAY_DATE_HADOOP_HDFS
15/04/16 03:28:19 INFO mapreduce.JobSubmitter: number of splits:4
15/04/16 03:28:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429145594985_0020
15/04/16 03:28:20 INFO impl.YarnClientImpl: Submitted application application_1429145594985_0020
15/04/16 03:28:20 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1429145594985_0020/
15/04/16 03:28:20 INFO mapreduce.Job: Running job: job_1429145594985_0020
15/04/16 03:28:31 INFO mapreduce.Job: Job job_1429145594985_0020 running in uber mode : false
15/04/16 03:28:31 INFO mapreduce.Job: map 0% reduce 0%
15/04/16 03:28:59 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000000_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:00 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000002_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
15/04/16 03:29:01 INFO mapreduce.Job: Task Id : attempt_1429145594985_0020_m_000001_0, Status : FAILED
Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z
```

Exporting data from hive to oracle with sqoop works completely fine.
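
For what it's worth, Error: oracle.jdbc.driver.T4CPreparedStatement.isClosed()Z in the map tasks is the signature of a NoSuchMethodError from an Oracle JDBC driver too old to implement isClosed(); the commonly suggested fix is to replace it with a newer ojdbc jar in Sqoop's lib directory. A sketch with illustrative paths and jar names:

```
# see which Oracle driver sqoop is shipping
ls /usr/lib/sqoop/lib | grep -i ojdbc
# swap it for a driver that matches your JDK and database (paths/names are examples)
rm /usr/lib/sqoop/lib/ojdbc14.jar
cp /path/to/ojdbc6.jar /usr/lib/sqoop/lib/
```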
azkaban3 errors when installed and run
The version is 3.12, installed following an online tutorial. Running it prints:

```
[root@bqdps4 azkaban-web-server-3.12.0]# sh bin/azkaban-web-start.sh
Error: HADOOP_HOME is not set. Hadoop job types will not run properly.
bin/.. :bin/../lib/activation-1.1.jar......:bin/../extlib/mysql-connector-java-5.1.28.jar:bin/../plugins/*/*.jar
[root@bqdps4 azkaban-web-server-3.12.0]# Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread "main"
```

My azkaban.properties matches the online tutorial:

```
database.type=mysql
mysql.port=3306
mysql.host=localhost
mysql.database=azkaban
mysql.user=azkaban
mysql.password=azkaban
mysql.numconnections=100
# Velocity dev mode
velocity.dev.mode=false
# Azkaban Jetty server properties.
jetty.maxThreads=25
jetty.ssl.port=8443
# jetty.use.ssl=false
jetty.port=8081
jetty.keystore=keystore
jetty.password=password
jetty.keypassword=keypassword
jetty.truststore=keystore
jetty.trustpassword=password
jetty.excludeCipherSuites=
```

If anyone has deployed this, please walk me through the install; some step of mine must have gone wrong. Many thanks.
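
On the first line of output: that message is only a warning that HADOOP_HOME is unset, which matters just for Hadoop job types; the StackOverflowError afterwards is the actual crash and should be chased separately in the web-server logs. Silencing the warning is a one-liner, with the path below as an example:

```
# export before launching (or persist it in /etc/profile)
export HADOOP_HOME=/usr/local/hadoop    # example path, use your install
sh bin/azkaban-web-start.sh
```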
hbase error: ERROR: Can't get master address from ZooKeeper; znode data == null
hadoop + zookeeper + hbase environment. The hadoop and zookeeper clusters are both fine; after hbase starts, checking hbase status reports the error ![image](https://img-ask.csdn.net/upload/201903/07/1551957383_266120.jpg) I've tried all the online advice (restarting hbase, restarting services, editing config files) and nothing fixes it. Begging someone who knows for guidance.

/etc/profile:

```
export JAVA_HOME=/opt/java/jdk1.8.0_201
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
#export HIVE_HOME=/opt/hive/apache-hive-2.1.1-bin
#export HIVE_CONF_DIR=${HIVE_HOME}/conf
#export SQOOP_HOME=/opt/sqoop/sqoop-1.4.6.bin__hadoop-2.0.4-alpha
export HBASE_HOME=/opt/hbase/hbase-1.4.9
export ZK_HOME=/opt/zookeeper/zookeeper-3.4.13
export CLASS_PATH=.:${JAVA_HOME}/lib:${HIVE_HOME}/lib:$CLASS_PATH
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${HIVE_HOME}/bin:${SQOOP_HOME}/bin:${HBASE_HOME}:${ZK_HOME}/bin:$PATH
```

hbase/conf/hbase-site.xml:

```
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:9000/hbase</value>
    <description>The directory shared by region servers.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/hbase/zk_data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>120000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.tickTime</name>
    <value>6000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1,hadoop2,hadoop3</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/root/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
```

hbase/conf/hbase-env.sh:

```
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
export JAVA_HOME=/opt/java/jdk1.8.0_201
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.0
export HBASE_HOME=/opt/hbase/hbase-1.4.9
export HBASE_CLASSPATH=/opt/hadoop/hadoop-2.8.0/etc/hadoop
export HBASE_PID_DIR=/root/hbase/pids
export HBASE_MANAGES_ZK=false
```

zookeeper/zoo.cfg:

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/dataLog
server.1=hadoop1:2886:3881
server.2=hadoop2:2887:3882
server.3=hadoop3:2888:3883
quorumListenOnAllIPs=true
```

/opt/hadoop/hadoop-2.8.0/etc/hadoop/core-site.xml:

```
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/tmp</value>
    <description>Abase for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
</configuration>
```
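
Since the configs look plausible, a useful next step is to ask ZooKeeper directly whether an HMaster ever registered; znode data == null generally means the master znode is missing or empty because the HMaster exited right after starting (check the hbase-*-master-*.log for the real failure). A sketch using the bundled client:

```
# connect to the quorum as configured above and inspect the hbase znodes
$ZK_HOME/bin/zkCli.sh -server hadoop1:2181
ls /hbase
get /hbase/master
```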
Anyone who has built an HDP cluster, please take a look; waiting online, fairly urgent
The final install check fails with this error.

stderr:

```
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_6_0_3_8' returned 1.
Error: Package: hadoop_2_6_0_3_8-hdfs-2.7.3.2.6.0.3-8.x86_64 (HDP-2.4.2.0)
       Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
```

stdout:

```
2017-07-28 14:20:39,167 - Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_6_0_3_8' returned 1. Error: Package: hadoop_2_6_0_3_8-hdfs-2.7.3.2.6.0.3-8.x86_64 (HDP-2.4.2.0) Requires: libtirpc-devel You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest
2017-07-28 14:20:39,168 - Failed to install package accumulo_2_6_0_3_8. Executing '/usr/bin/yum clean metadata'
2017-07-28 14:20:39,470 - Retrying to install package accumulo_2_6_0_3_8 after 30 seconds
Command failed after 1 tries
```
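
The dependency message itself is explicit: the HDP hdfs package requires libtirpc-devel, which is not available in the enabled repos. The usual remedy is to make it installable first and retry (repo names vary by distribution; on CentOS it often sits in the base or optional channel):

```
# install the missing dependency, then retry the failed package
yum install -y libtirpc libtirpc-devel
yum install -y accumulo_2_6_0_3_8
```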
Linux command-line problem: the Tab key stops working
```
[hadoop@Hadoop1 ~]$ cd had
bash: !ref: unbound variable
bash: !ref: unbound variable
bash: words[i]: unbound variable
[hadoop@Hadoop1 ~]$ source /etc/profile
bash: HISTCONTROL: unbound variable
bash: XTERM_VERSION: unbound variable
bash: local256: unbound variable
bash: USER_LS_COLORS: unbound variable
bash: KSH_VERSION: unbound variable
bash: ZSH_VERSION: unbound variable
bash: ZSH_VERSION: unbound variable
[hadoop@Hadoop1 ~]$ vim /etc/profile
```

When typing a command, the Tab key no longer auto-completes.
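
These unbound variable errors coming out of completion code are the classic symptom of bash running with nounset (set -u), which bash-completion does not tolerate; presumably it was added to a profile script at some point (an assumption, since the profiles are not shown). A sketch for confirming and undoing it:

```
# find where nounset is switched on
grep -rn 'set -u\|set -o nounset' /etc/profile /etc/profile.d/ ~/.bashrc ~/.bash_profile
# disable it in the current shell; if Tab completion returns, delete it from the profile
set +u
```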
[Hadoop 3.0.0] Could not find YarnChild
Could not find or load main class org.apache.hadoop.mapred.YarnChild

Using hadoop 3.0.0, everything is configured and the daemons start, but running hadoop-mapreduce-examples-3.0.0.jar errors out (my own programs do too). ![error message](https://img-ask.csdn.net/upload/201802/06/1517908041_839845.png)

My configuration:

```
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoopdata</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
</property>
```

Environment:

```
export HADOOP_HOME=/data/hadoop-3.0.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```

HADOOP_CLASSPATH is not configured. I've searched Baidu, Google, and Stack Overflow for quite a while without finding an answer; hoping someone can help!
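
The fix most often reported for exactly this symptom on Hadoop 3.x (a sketch, reusing the poster's /data/hadoop-3.0.0 rather than $HADOOP_COMMON_HOME, which may be empty in the container environment) is to hand every MapReduce process an explicit HADOOP_MAPRED_HOME in mapred-site.xml:

```
<!-- mapred-site.xml sketch: absolute HADOOP_MAPRED_HOME for AM, map and reduce tasks -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
```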
A question about hadoop_hive
I haven't been working with hadoop and hive for long; I took the job over from someone else, so it's a bit of a struggle. I recently ran into a problem I don't know how to solve and hope someone can help. The situation: I run a script that reads data from hive and writes it into a csv file; the hiveQL is just a read from one table with some time-range parameters on a few fields. But the run fails intermittently. The log:

```
Task with the most failures(4):
-----
Task ID:
  task_1513574350768_3535_m_000655
URL:
  http://hadoopnode102:8088/taskdetails.jsp?jobid=job_1513574350768_3535&tipid=task_1513574350768_3535_m_000655
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (row data omitted here)
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:185)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (row data omitted here)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:503)
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:176)
    ... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.net.SocketTimeoutException: 75000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/132.96.186.7:58295 remote=/132.96.186.9:50010]
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:723)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
    at org.apache.hadoop.hive.ql.exec.FilterOperator.processOp(FilterOperator.java:120)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
    ... 9 more
Caused by: java.net.SocketTimeoutException: 75000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/132.96.186.7:58295 remote=/132.96.186.9:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2201)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1439)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
```

But a few reruns eventually get through, so I don't think it's a data problem; more likely something in the configuration isn't right. Also, this query has only a map stage and no reduce, and I don't know how to troubleshoot it. Hoping someone knows how to fix this and can point the way. Thanks~
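
The root cause in the trace is the 75000 ms read timeout while the client opens an HDFS write pipeline to a DataNode, which points at overloaded DataNodes or a flaky network rather than the data, and would also explain why reruns succeed. One mitigation that is commonly suggested (values are illustrative) is to raise the socket timeouts in hdfs-site.xml or per Hive session:

```
<!-- hdfs-site.xml sketch: lengthen client/datanode socket timeouts (illustrative values) -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>300000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>300000</value>
</property>
```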
DataNode shuts down without reporting an error
It starts up fine, then drops after a while, with no exception information at all. Why would that be? Here are some of the trailing logs; nothing is thrown anywhere.

```
2019-11-02 16:13:13,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 to 172.31.19.252:50010 2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 (numBytes=109043) to /172.31.19.252:50010 2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742042_1218 (numBytes=197986) to /172.31.19.252:50010 2019-11-02 16:13:16,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 of size 58160 2019-11-02 16:13:16,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 of size 2178774 2019-11-02 16:13:16,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 to 172.31.19.252:50010 2019-11-02 16:13:16,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 to 172.31.19.252:50010 2019-11-02 16:13:17,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 (numBytes=34604) to /172.31.19.252:50010 2019-11-02 16:13:17,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 (numBytes=780664) to /172.31.19.252:50010 2019-11-02 16:13:19,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 to 172.31.19.252:50010 2019-11-02 16:13:19,999 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 to 172.31.19.252:50010 2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 (numBytes=6052) to /172.31.19.252:50010 2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 (numBytes=592319) to /172.31.19.252:50010 2019-11-02 16:13:44,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 src: /172.31.20.57:51732 dest: /172.31.23.3:50010 2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51732, dest: /172.31.23.3:50010, bytes: 1108073, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, duration(ns): 9331035 2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,223 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 src: /172.31.20.57:51736 dest: /172.31.23.3:50010 2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51736, dest: /172.31.23.3:50010, bytes: 20744, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, duration(ns): 822959 2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,240 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 src: /172.31.20.57:51738 dest: /172.31.23.3:50010 2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51738, dest: /172.31.23.3:50010, bytes: 53464, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, duration(ns): 834208 2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,250 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 src: /172.31.20.57:51740 dest: /172.31.23.3:50010 2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51740, dest: /172.31.23.3:50010, bytes: 60686, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 
77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, duration(ns): 836219 2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,139 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 src: /172.31.20.57:51748 dest: /172.31.23.3:50010 2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51748, dest: /172.31.23.3:50010, bytes: 914311, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, duration(ns): 7451340 2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 src: /172.31.20.57:51752 dest: /172.31.23.3:50010 2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51752, dest: /172.31.23.3:50010, bytes: 706710, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, duration(ns): 2666689 2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,192 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 src: /172.31.20.57:51754 dest: /172.31.23.3:50010 2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51754, dest: /172.31.23.3:50010, bytes: 186260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, duration(ns): 1335836 2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,617 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 src: /172.31.20.57:51756 dest: /172.31.23.3:50010 2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51756, dest: /172.31.23.3:50010, bytes: 1768012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, duration(ns): 8602898 2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:46,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 2019-11-02 16:13:46,981 
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 of size 205389 2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 to 172.31.19.252:50010 2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 to 172.31.19.252:50010 2019-11-02 16:13:47,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 (numBytes=20744) to /172.31.19.252:50010 2019-11-02 16:13:47,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 (numBytes=1108073) to /172.31.19.252:50010 2019-11-02 16:13:47,315 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249 src: /172.31.20.57:51766 dest: /172.31.23.3:50010 2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51766, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, duration(ns): 3408777 2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 src: /172.31.20.57:51768 dest: /172.31.23.3:50010 2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51768, dest: /172.31.23.3:50010, bytes: 36519, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, duration(ns): 1284246 2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 src: /172.31.20.57:51776 dest: /172.31.23.3:50010 2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51776, dest: /172.31.23.3:50010, bytes: 279012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, duration(ns): 2573122 
2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 src: /172.31.20.57:51778 dest: /172.31.23.3:50010 2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51778, dest: /172.31.23.3:50010, bytes: 1344870, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, duration(ns): 3770082 2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:48,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 src: /172.31.20.57:51780 dest: /172.31.23.3:50010 2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51780, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, duration(ns): 2365213 2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:48,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257 src: /172.31.20.57:51782 dest: /172.31.23.3:50010 2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51782, dest: /172.31.23.3:50010, bytes: 99555, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, duration(ns): 1140563 2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259 src: /172.31.20.57:51786 dest: /172.31.23.3:50010 2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51786, dest: /172.31.23.3:50010, bytes: 20998, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, duration(ns): 823110 2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262 src: /172.31.20.57:51792 dest: /172.31.23.3:50010 2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51792, dest: /172.31.23.3:50010, bytes: 224277, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, duration(ns): 1129868 2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263 src: /172.31.20.57:51794 dest: /172.31.23.3:50010 2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51794, dest: /172.31.23.3:50010, bytes: 780664, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, duration(ns): 2377601 2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 2019-11-02 16:13:49,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 2019-11-02 16:13:49,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 of size 232248 2019-11-02 16:13:49,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 of size 434678 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 to 172.31.19.252:50010 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 to 172.31.19.252:50010 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742073_1249 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 for deletion 2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 (numBytes=53464) to /172.31.19.252:50010 2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742073_1249 file 
/data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 2019-11-02 16:13:50,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 (numBytes=60686) to /172.31.19.252:50010 2019-11-02 16:13:51,310 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269 src: /172.31.19.252:46180 dest: /172.31.23.3:50010 2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.19.252:46180, dest: /172.31.23.3:50010, bytes: 94, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, duration(ns): 2826729 2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:52,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 2019-11-02 16:13:52,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 2019-11-02 16:13:52,986 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 of size 1033299 2019-11-02 16:13:52,987 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 of size 892808 2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 to 172.31.19.252:50010 2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 to 172.31.19.252:50010 2019-11-02 16:13:53,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 (numBytes=914311) to /172.31.19.252:50010 2019-11-02 16:13:53,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 (numBytes=706710) to /172.31.19.252:50010 2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer 
BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 to 172.31.19.252:50010 2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 to 172.31.19.252:50010 2019-11-02 16:13:56,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 (numBytes=186260) to /172.31.19.252:50010 2019-11-02 16:13:56,025 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 2019-11-02 16:13:56,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 2019-11-02 16:13:56,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 of size 36455 2019-11-02 16:13:56,040 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 of size 1801469 2019-11-02 16:13:56,068 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 (numBytes=1768012) to /172.31.19.252:50010 2019-11-02 16:13:58,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 of size 19827 2019-11-02 16:13:58,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 to 172.31.19.252:50010 2019-11-02 16:13:59,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 (numBytes=36519) to /172.31.19.252:50010 2019-11-02 16:13:59,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 of size 267634 2019-11-02 16:14:01,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273 src: /172.31.23.3:50512 dest: /172.31.23.3:50010 2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.31.23.3:50512, dest: /172.31.23.3:50010, bytes: 1029, op: HDFS_WRITE, cliID: DFSClient_attempt_1572710114754_0009_m_000000_0_-2142389405_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, duration(ns): 3798130 2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, type=LAST_IN_PIPELINE terminating 2019-11-02 16:14:01,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 2019-11-02 16:14:01,994 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 of size 375618 2019-11-02 16:14:01,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 to 172.31.19.252:50010 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 of size 1765905 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 to 172.31.19.252:50010 2019-11-02 16:14:02,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 (numBytes=279012) to /172.31.19.252:50010 2019-11-02 16:14:02,008 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 (numBytes=1344870) to /172.31.19.252:50010 2019-11-02 16:14:04,999 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 because on-disk length 990927 is shorter than NameNode recorded length 9223372036854775807 2019-11-02 16:14:08,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 to 172.31.19.252:50010 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 to 172.31.19.252:50010
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079 for deletion
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 (numBytes=375618) to /172.31.19.252:50010
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 (numBytes=36455) to /172.31.19.252:50010
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079
2019-11-02 16:14:11,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 to 172.31.19.252:50010
2019-11-02 16:14:11,007 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 (numBytes=25496) to /172.31.19.252:50010
2019-11-02 17:01:35,904 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-793432708-172.31.20.57-1572709584342 Total blocks: 88, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x15733bb21ccd9a44, containing 1 storage report(s), of which we sent 1. The reports had 88 total blocks and used 1 RPC(s). This took 1 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-793432708-172.31.20.57-1572709584342
```
Newbie asking for help: the NodeManager process on the Hadoop cluster slaves keeps dying on its own
Cluster roles: s100 is the master; s101, s102, and s103 are the slaves.

After `namenode -format` and `start-all.sh`, the NodeManager process is visible on the slaves, but the web UI at http://S100:50070 shows no DataNode information. Checking yarn-root-nodemanager-S101.log on slave S101 turned up the following errors:

```
2019-06-01 02:20:02,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:03,596 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:04,603 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:05,609 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:06,611 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:07,616 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,624 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater
java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,632 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,638 INFO org.apache.hadoop.service.AbstractService: Service NodeManager failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,716 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
2019-06-01 02:20:08,727 INFO org.apache.hadoop.ipc.Server: Stopping server on 39863
2019-06-01 02:20:08,742 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 39863
2019-06-01 02:20:08,748 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2019-06-01 02:20:08,749 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting.
2019-06-01 02:20:08,790 INFO org.apache.hadoop.ipc.Server: Stopping server on 8040
2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8040
2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2019-06-01 02:20:08,836 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Public cache exiting
2019-06-01 02:20:08,840 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system...
2019-06-01 02:20:08,842 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped.
2019-06-01 02:20:08,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2019-06-01 02:20:08,843 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,857 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at S101/127.0.0.1
************************************************************/
```

Could one of the experts take a look? Thanks in advance. The slaves' configuration is as follows.

yarn-site.xml
```
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>S100</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/disk1/nm-local-dir,/disk2/nm-local-dir</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>16384</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>16</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>S100:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>S100:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>S100:8031</value>
  </property>
</configuration>
```
core-site.xml
```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://S100:8020/</value>
  </property>
</configuration>
```
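The trace above carries the actual clue: the NodeManager reports its own address as S101/127.0.0.1, and the connection to S100:8031 (the resource-tracker port set in yarn.resourcemanager.resource-tracker.address above) is refused. That usually means either the hostnames resolve to the loopback address in /etc/hosts, or the ResourceManager is not actually listening on S100. A quick diagnostic sketch, using only the hostnames and ports that appear in the log (adjust to your own cluster):

```
# On the slave (S101): if /etc/hosts maps the machine's own hostname to
# 127.0.0.1, YARN registers with the loopback address, exactly as the
# "Call From S101/127.0.0.1" line shows.
grep -nE '127\.0\.0\.1|S10[0-3]' /etc/hosts

# Check that the slave can reach the ResourceManager port at all.
nc -zv S100 8031

# On the master (S100): confirm the ResourceManager is running and is
# bound to 8031 on a non-loopback interface.
jps | grep ResourceManager
netstat -tlnp | grep 8031
```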
Hadoop reports errors when running a job — asking the experts for guidance, thanks
```
[hadoop@Master hadoop]$ bin/hadoop jar wikipedia-miner-hadoop.jar org.wikipedia.miner.extraction.DumpExtractor input/enwiki-20130503-pages-articles.xml input/languages.xml en input/en-sent.bin output
13/11/01 15:20:37 INFO extraction.DumpExtractor: Extracting site info
13/11/01 15:20:37 INFO extraction.DumpExtractor: Starting page step
13/11/01 15:20:37 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/01 15:20:37 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/11/01 15:20:37 INFO mapred.FileInputFormat: Total input paths to process : 1
13/11/01 15:20:38 INFO mapred.JobClient: Running job: job_201311011519_0001
13/11/01 15:20:39 INFO mapred.JobClient: map 0% reduce 0%
13/11/01 15:20:48 INFO mapred.JobClient: Task Id : attempt_201311011519_0001_m_000654_0, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
13/11/01 15:20:48 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000654_0&filter=stdout
13/11/01 15:20:48 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000654_0&filter=stderr
13/11/01 15:20:54 INFO mapred.JobClient: Task Id : attempt_201311011519_0001_m_000654_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
13/11/01 15:20:54 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000654_1&filter=stdout
13/11/01 15:20:54 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000654_1&filter=stderr
13/11/01 15:21:00 INFO mapred.JobClient: Task Id : attempt_201311011519_0001_m_000654_2, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
13/11/01 15:21:00 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000654_2&filter=stdout
13/11/01 15:21:00 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000654_2&filter=stderr
13/11/01 15:21:12 INFO mapred.JobClient: Task Id : attempt_201311011519_0001_m_000653_0, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
13/11/01 15:21:12 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000653_0&filter=stdout
13/11/01 15:21:12 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000653_0&filter=stderr
13/11/01 15:21:17 INFO mapred.JobClient: Task Id : attempt_201311011519_0001_m_000653_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
13/11/01 15:21:17 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000653_1&filter=stdout
13/11/01 15:21:17 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000653_1&filter=stderr
13/11/01 15:21:23 INFO mapred.JobClient: Task Id : attempt_201311011519_0001_m_000653_2, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
13/11/01 15:21:23 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000653_2&filter=stdout
13/11/01 15:21:23 WARN mapred.JobClient: Error reading task outputhttp://Master.Hadoop:50060/tasklog?plaintext=true&taskid=attempt_201311011519_0001_m_000653_2&filter=stderr
13/11/01 15:21:29 INFO mapred.JobClient: Job complete: job_201311011519_0001
13/11/01 15:21:29 INFO mapred.JobClient: Counters: 0
Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
    at org.wikipedia.miner.extraction.PageStep.run(Unknown Source)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.wikipedia.miner.extraction.DumpExtractor.run(Unknown Source)
    at org.wikipedia.miner.extraction.DumpExtractor.main(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
```
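The JobClient could not fetch any task output over HTTP ("Error reading task output..."), so the real reason behind "Task process exit with nonzero status of 1" is only visible on the TaskTracker machine that ran the failed attempts. A sketch of where to look; the userlogs location below is the Hadoop 0.20/1.x default and the exact layout varies slightly by version:

```
# Run on the TaskTracker node that executed the failed attempt.
ATTEMPT=attempt_201311011519_0001_m_000654_0   # one of the failed IDs above

# Task child JVM output lives under the TaskTracker's userlogs directory;
# stderr/syslog usually contain the actual startup error.
find $HADOOP_HOME/logs/userlogs -name "$ATTEMPT" -type d
cat $HADOOP_HOME/logs/userlogs/*/$ATTEMPT/stderr 2>/dev/null
cat $HADOOP_HOME/logs/userlogs/$ATTEMPT/stderr 2>/dev/null   # older layout

# The TaskTracker daemon log may also show why the child JVM exited with 1
# (e.g. a bad mapred.child.java.opts or ulimit setting).
tail -n 100 $HADOOP_HOME/logs/hadoop-*-tasktracker-*.log
```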
Seeking help: Flink cluster in standalone mode, high-availability deployment will not start.
##### HDFS, ZooKeeper, and Flink clusters were deployed following a tutorial.
##### HDFS and ZooKeeper work normally, and Flink standalone starts fine. When setting up the HA cluster, startup reported no errors, but jps showed no processes, and the logs contained the following. (Start order: zkServer.sh start ==> start-dfs.sh, start-yarn.sh ==> bin/start-cluster.sh)
##### Cluster environment: CentOS 7.3, Hadoop-2.8.5, Java 1.8, Scala-2.12, flink-1.9.0-2.12, ZooKeeper 3.4.14
```
2019-09-05 21:38:02,658 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - --------------------------------------------------------------------------------
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Starting StandaloneSessionClusterEntrypoint (Version: 1.9.0, Rev:9c32ed9, Date:19.08.2019 @ 16:16:55 UTC)
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - OS current user: root
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Current Hadoop/Kerberos user: <no hadoop dependency found>
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.221-b11
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Maximum heap size: 989 MiBytes
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - JAVA_HOME: /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - No Hadoop Dependency available
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - JVM Options:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Xms1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Xmx1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Dlog.file=/opt/software/flink/log/flink-root-standalonesession-14-master.log
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Dlog4j.configuration=file:/opt/software/flink/conf/log4j.properties
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     -Dlogback.configurationFile=file:/opt/software/flink/conf/logback.xml
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Program Arguments:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --configDir
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     /opt/software/flink/conf
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --executionMode
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     cluster
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --host
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     master
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     --webui-port
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     8081
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Classpath: /opt/software/flink/lib/flink-table_2.12-1.9.0.jar:/opt/software/flink/lib/flink-table-blink_2.12-1.9.0.jar:/opt/software/flink/lib/log4j-1.2.17.jar:/opt/software/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/software/flink/lib/flink-dist_2.12-1.9.0.jar::/usr/hadoop/hadoop/etc/hadoop:
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - --------------------------------------------------------------------------------
2019-09-05 21:38:02,663 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: env.java.home, /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.rpc.address, master
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.rpc.port, 6123
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: taskmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: parallelism.default, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability, zookeeper
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.storageDir, hdfs:///flink/ha/
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.quorum, master:2181,slave02:2181,slave03:2181
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.path.root, /flink
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.cluster-id, /cluster_one
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.client.acl, open
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.execution.failover-strategy, region
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.port, 8081
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.bind-port, 8080-8090
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.bind-address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: web.submit.enable, false
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Starting StandaloneSessionClusterEntrypoint.
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Install default filesystem.
2019-09-05 21:38:02,877 INFO  org.apache.flink.core.fs.FileSystem                           - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
2019-09-05 21:38:02,903 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Install security context.
2019-09-05 21:38:02,914 INFO  org.apache.flink.runtime.security.modules.HadoopModuleFactory - Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,926 INFO  org.apache.flink.runtime.security.SecurityUtils               - Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,927 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Initializing cluster services.
2019-09-05 21:38:03,430 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils         - Trying to start actor system at master:0
2019-09-05 21:38:04,268 INFO  akka.event.slf4j.Slf4jLogger                                  - Slf4jLogger started
2019-09-05 21:38:04,314 INFO  akka.remote.Remoting                                          - Starting remoting
2019-09-05 21:38:04,566 INFO  akka.remote.Remoting                                          - Remoting started; listening on addresses :[akka.tcp://flink@master:36882]
2019-09-05 21:38:04,674 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils         - Actor system started at akka.tcp://flink@master:36882
2019-09-05 21:38:04,701 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
    at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
    at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
    ... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
    at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
    ... 13 more
.
2019-09-05 21:38:04,708 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Stopping Akka RPC service.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Shutting down remote daemon.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remote daemon shut down; proceeding with flushing remote transports.
2019-09-05 21:38:04,765 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remoting shut down.
2019-09-05 21:38:04,815 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Stopped Akka RPC service.
2019-09-05 21:38:04,816 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
    at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
    at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
    ... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
    ... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
    at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
    ... 13 more
```
##### The same errors are repeated in flink-root-standalonesession-14-master.log.
##### Environment variables are already configured as follows (/etc/profile):
```
export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:/lib
export PATH=$PATH:$JAVA_HOME/bin:.
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:.
export HADOOP_HOME=/usr/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:.
export PYTHON_HOME=/usr/local/python3
export PATH=$PATH:$PYTHON_HOME/bin:.
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:/usr/local/scala/bin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib:/usr/hadoop/hadoop/lib/native
```

##### HA settings in flink-conf.yaml:

```
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: master:2181,slave02:2181,slave03:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /cluster_one
```

##### Since the log contains the messages below, my guess is that the problem lies in the environment variables or in the Hadoop dependency path.

```
2019-09-06 11:44:39,820 INFO  org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
……
2019-09-06 11:44:41,943 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
……
2019-09-06 11:44:42,075 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
```

##### I then added the HDFS settings below to flink-conf.yaml.

```
fs.hdfs.hadoopconf: /usr/hadoop/hadoop/etc/hadoop
fs.hdfs.hdfsdefault: /usr/hadoop/hadoop/etc/hadoop/hdfs-default.xml
fs.hdfs.hdfssite: /usr/hadoop/hadoop/etc/hadoop/hdfs-site.xml
```

##### The Flink cluster still fails to start, and the log output is unchanged from before.
##### I'm asking the community for help here: if anyone has run into this, is there a known fix?
##### Many thanks.
##### PS: I have not yet tried the Flink-on-YARN deployment mode.
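##### A note for anyone hitting the same wall: "Hadoop is not in the classpath/dependencies" means the Flink start scripts found no Hadoop jars at startup, and in recent Flink versions setting HADOOP_HOME in /etc/profile does not by itself put those jars on Flink's classpath. The commonly documented remedy is to export HADOOP_CLASSPATH before starting the cluster. Below is a minimal sketch, assuming the hadoop command from the profile above resolves and assuming a hypothetical Flink install path of /opt/software/flink (the original post does not state where Flink is installed):

```
# Sketch only: expose Hadoop's jars to Flink. Flink's start scripts
# append $HADOOP_CLASSPATH to their own classpath, so this can also be
# added to /etc/profile next to the variables above.
export HADOOP_CLASSPATH=$(hadoop classpath)

# Restart the standalone HA cluster so the entrypoint picks it up.
# /opt/software/flink is a hypothetical path -- adjust to the real one.
/opt/software/flink/bin/stop-cluster.sh
/opt/software/flink/bin/start-cluster.sh
```

##### An alternative is to place a flink-shaded-hadoop-2-uber jar matching the Hadoop version into Flink's lib/ directory on every node. Either way, the fs.hdfs.hadoopconf-style keys tried above only point Flink at Hadoop's configuration files; they do not add the Hadoop jars themselves, which would explain why the log did not change.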