Code::Blocks: no Global compiler settings entry under the Settings -> Compiler settings menu, so I can't compile

1. There is no Global compiler settings entry under Code::Blocks' Settings -> Compiler settings menu, so I can't compile.
2. I can't find the build (compile) button, so there is no way to compile.
3. Running CbLauncher.exe does let me compile.

[screenshot]

I'm baffled; any help appreciated!

**In the end I solved it myself!!!
It turned out the Compiler plugin had been disabled...**
[screenshot]

ide

1 answer

qq_37321212
Uh, thanks, but that doesn't seem to help with my problem. For some reason my copy doesn't even have the Global compiler settings menu.
Replied 2 months ago
Other related questions
How can Code::Blocks automatically recognize the Fortran source format?
I'm a novice. While building a large Fortran program with Code::Blocks I ran into the following errors:

    -------------- Build: Debug in BernLib (compiler: PGI Fortran Compiler)---------------
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\AMBCHN.f -o obj\Debug\LibBERN\FOR\AMBCHN.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\AMBSET.f -o obj\Debug\LibBERN\FOR\AMBSET.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\ARGLAT.f -o obj\Debug\LibBERN\FOR\ARGLAT.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\BESTN.f -o obj\Debug\LibBERN\FOR\BESTN.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\CCOR.f -o obj\Debug\LibBERN\FOR\CCOR.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\CORDUP.f -o obj\Debug\LibBERN\FOR\CORDUP.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\CURARC.f -o obj\Debug\LibBERN\FOR\CURARC.o
    pgfortran.exe -g -module obj\Debug\ -c ..\LibBERN\FOR\D_COMJPL.f90 -o obj\Debug\LibBERN\FOR\D_COMJPL.o
    PGF90-S-0021-Label field of continuation line is not blank (..\LibBERN\FOR\D_COMJPL.f90: 38)
    PGF90-S-0021-Label field of continuation line is not blank (..\LibBERN\FOR\D_COMJPL.f90: 43)
    PGF90-S-0044-Multiple declaration for symbol ksize (..\LibBERN\FOR\D_COMJPL.f90: 45)
    PGF90-S-0044-Multiple declaration for symbol irecsz (..\LibBERN\FOR\D_COMJPL.f90: 45)
    PGF90-S-0021-Label field of continuation line is not blank (..\LibBERN\FOR\D_COMJPL.f90: 48)
    PGF90-S-0044-Multiple declaration for symbol jplnam (..\LibBERN\FOR\D_COMJPL.f90: 46)
    PGF90-W-0024-CHARACTER or Hollerith constant truncated to fit data type (..\LibBERN\FOR\D_COMJPL.f90: 46)
    0 inform, 1 warnings, 6 severes, 0 fatal for d_comjpl
    Process terminated with status 2 (0 minute(s), 2 second(s))
    6 error(s), 1 warning(s) (0 minute(s), 2 second(s))

The project contains both fixed-format and free-format source files. The fixed-format sources compile without problems, but the free-format ones fail. By now it seems clear the compiler is treating the free-format sources as fixed-format. How do I configure Code::Blocks to recognize fixed versus free format automatically from the source file extension? Advice from the experts would be much appreciated.
Code::Blocks: the Ctrl+Shift+C comment shortcut doesn't work
When using Code::Blocks, all the other shortcuts work fine; only Ctrl+Shift+C doesn't. It fails to comment out multiple lines and instead shows "HTML not found..." followed by a pile of file names. How can this be fixed?
The same code runs fine in Code::Blocks, but VS Code reports collect2.exe: error: ld returned 1 exit status
A piece of C code runs fine in Code::Blocks, but in VS Code it reports:

    C:\Users\ADMINI~1\AppData\Local\Temp\cc3JOmOu.o:test.cpp:(.text+0x93): undefined reference to `gen_Data(void*)'
    collect2.exe: error: ld returned 1 exit status

The code is as follows. Header file:
```
//kpid.h
....
typedef struct _Para {
    .....
} Para;

void gen_Data(void *p);
```
The .c file:
```
//kpid.c
#include "kpid.h"

void gen_Data(void *p)
{
    Para *pa = (Para *)p;
    ......
}
```
main:
```
//main.c
#include "kpid.h"

int main()
{
    Para p;
    p.menber1 = 10; // initialization
    ....
    gen_Data(&p);
}
```
1. This program runs fine in Code::Blocks 17.2 and produces the expected output.
2. If the struct definition and the gen_Data() definition/implementation are all moved into main.c (same code, just relocated), it also runs fine in VS Code.
3. But with the code as above, with the definition and implementation in separate .h/.c files included from main.c, it fails with the error shown. (The function name is spelled correctly.)
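No answer was posted, but the error message itself offers a clue worth recording: the object file is named test.cpp and the missing symbol is shown with a C++-style signature, gen_Data(void*), which suggests the file is being compiled as C++, and/or that kpid.c is simply not being compiled and linked by the VS Code build task (Code::Blocks compiles and links every file in the project automatically). A minimal sketch of the usual two-part fix, assuming a MinGW gcc/g++ toolchain; the field name menber1 is just the placeholder from the question:
```
/* kpid.h (sketch) */
#ifndef KPID_H
#define KPID_H

#ifdef __cplusplus
extern "C" {   /* keep C linkage for gen_Data when the header is included from C++ */
#endif

typedef struct _Para {
    int menber1;   /* placeholder member from the question */
} Para;

void gen_Data(void *p);

#ifdef __cplusplus
}
#endif

#endif /* KPID_H */
```
The matching build command then has to name both translation units, e.g. `gcc main.c kpid.c -o main`, rather than compiling only the file open in the editor.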
(Beginner question) Developing Qt with Code::Blocks on Ubuntu
On Ubuntu I installed Code::Blocks from the software center. When creating a Qt4 project I don't know what to choose for Qt's location (how do I find the include and lib directories?) ![screenshot](https://img-ask.csdn.net/upload/201705/31/1496216880_828898.png)
In Code::Blocks, 10^9 stored as a double doesn't overflow, but stored as a long double it does
In Code::Blocks (GNU GCC compiler), storing 10^9 in a double doesn't overflow, yet storing it in a long double apparently does. 10^9 neither exceeds the range of double nor, still less, that of long double. Is this a GNU GCC compiler bug? My Code::Blocks is 16.01, already the latest version.

    #include <stdio.h>
    #include <math.h>
    double a;
    long double b;
    int main()
    {
        a = 1E9;
        b = a;
        printf(" sizeof_Double=%d\n 2^63=%e\n a=%e\n", sizeof(double), pow(2,63), a);
        printf(" sizeof_LongDouble=%d\n 2^95=%e\n b=%e\n", sizeof(long double), pow(2,95), b);
        return 0;
    }

![screenshot](https://img-ask.csdn.net/upload/201605/12/1463022262_163434.png)
![screenshot](https://img-ask.csdn.net/upload/201605/12/1463022277_425762.png)

About how to print b: with %e, %f, or %lf there is a warning: format '%e' (or '%f', '%lf') expects argument of type 'double', but argument 4 has type 'long double' [-Wformat=]. With a lowercase l: unknown conversion type character 0xa in format [-Wformat=]; too many arguments for format [-Wformat-extra-args]. With an uppercase L: unknown conversion type character 'L' in format [-Wformat=]; too many arguments for format [-Wformat-extra-args].
My processor is 64-bit as well, with instruction sets x86, x86-64, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, FMA, AES.
![screenshot](https://img-ask.csdn.net/upload/201605/12/1463022355_496475.png)
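No answer was posted here either. As a hedged aside (my reading of the symptoms, not confirmed in the thread): this looks like a printf format mismatch rather than genuine overflow. Passing a long double to %e is undefined behavior (the correct length modifier is %Le), and MinGW-based Code::Blocks installs normally forward printf to Microsoft's msvcrt.dll, which does not understand GCC's 80-bit long double, so even %Le misprints unless MinGW's own C99-conforming stdio is enabled. A minimal sketch:
```
/* Sketch: define __USE_MINGW_ANSI_STDIO before any include so that
 * MinGW's own printf implementation handles %Le / %Lf and %zu. */
#define __USE_MINGW_ANSI_STDIO 1
#include <stdio.h>

int main(void)
{
    long double b = 1E9L;
    printf("sizeof(long double) = %zu\n", sizeof(long double)); /* %zu, not %d, for size_t */
    printf("b = %Le\n", b);  /* %Le: long double in scientific notation */
    return 0;
}
```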
DataNode shuts down without reporting any error
It starts up fine, then drops out after a while, with no exception information whatsoever. Why would that be? Here are some of the trailing logs; no exception is thrown at all.
```
2019-11-02 16:13:13,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 to 172.31.19.252:50010 2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 (numBytes=109043) to /172.31.19.252:50010 2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742042_1218 (numBytes=197986) to /172.31.19.252:50010 2019-11-02 16:13:16,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 of size 58160 2019-11-02 16:13:16,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 of size 2178774 2019-11-02 16:13:16,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 to 172.31.19.252:50010 2019-11-02 16:13:16,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 to 172.31.19.252:50010 2019-11-02 16:13:17,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 (numBytes=34604) to /172.31.19.252:50010 2019-11-02 16:13:17,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 (numBytes=780664) to /172.31.19.252:50010 2019-11-02 16:13:19,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 to 172.31.19.252:50010 2019-11-02 16:13:19,999 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 to 172.31.19.252:50010 2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 (numBytes=6052) to /172.31.19.252:50010 2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 (numBytes=592319) to /172.31.19.252:50010 2019-11-02 16:13:44,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 src: /172.31.20.57:51732 dest: /172.31.23.3:50010 2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51732, dest: /172.31.23.3:50010, bytes: 1108073, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, duration(ns): 9331035 2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,223 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 src: /172.31.20.57:51736 dest: /172.31.23.3:50010 2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51736, dest: /172.31.23.3:50010, bytes: 20744, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, duration(ns): 822959 2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,240 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 src: /172.31.20.57:51738 dest: /172.31.23.3:50010 2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51738, dest: /172.31.23.3:50010, bytes: 53464, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, duration(ns): 834208 2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,250 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 src: /172.31.20.57:51740 dest: /172.31.23.3:50010 2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51740, dest: /172.31.23.3:50010, bytes: 60686, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 
77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, duration(ns): 836219 2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,139 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 src: /172.31.20.57:51748 dest: /172.31.23.3:50010 2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51748, dest: /172.31.23.3:50010, bytes: 914311, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, duration(ns): 7451340 2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 src: /172.31.20.57:51752 dest: /172.31.23.3:50010 2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51752, dest: /172.31.23.3:50010, bytes: 706710, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, duration(ns): 2666689 2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,192 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 src: /172.31.20.57:51754 dest: /172.31.23.3:50010 2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51754, dest: /172.31.23.3:50010, bytes: 186260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, duration(ns): 1335836 2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,617 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 src: /172.31.20.57:51756 dest: /172.31.23.3:50010 2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51756, dest: /172.31.23.3:50010, bytes: 1768012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, duration(ns): 8602898 2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:46,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 2019-11-02 16:13:46,981 
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 of size 205389 2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 to 172.31.19.252:50010 2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 to 172.31.19.252:50010 2019-11-02 16:13:47,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 (numBytes=20744) to /172.31.19.252:50010 2019-11-02 16:13:47,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 (numBytes=1108073) to /172.31.19.252:50010 2019-11-02 16:13:47,315 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249 src: /172.31.20.57:51766 dest: /172.31.23.3:50010 2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51766, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, duration(ns): 3408777 2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 src: /172.31.20.57:51768 dest: /172.31.23.3:50010 2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51768, dest: /172.31.23.3:50010, bytes: 36519, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, duration(ns): 1284246 2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 src: /172.31.20.57:51776 dest: /172.31.23.3:50010 2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51776, dest: /172.31.23.3:50010, bytes: 279012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, duration(ns): 2573122 
2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 src: /172.31.20.57:51778 dest: /172.31.23.3:50010 2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51778, dest: /172.31.23.3:50010, bytes: 1344870, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, duration(ns): 3770082 2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:48,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 src: /172.31.20.57:51780 dest: /172.31.23.3:50010 2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51780, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, duration(ns): 2365213 2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:48,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257 src: /172.31.20.57:51782 dest: /172.31.23.3:50010 2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51782, dest: /172.31.23.3:50010, bytes: 99555, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, duration(ns): 1140563 2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259 src: /172.31.20.57:51786 dest: /172.31.23.3:50010 2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51786, dest: /172.31.23.3:50010, bytes: 20998, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, duration(ns): 823110 2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262 src: /172.31.20.57:51792 dest: /172.31.23.3:50010 2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51792, dest: /172.31.23.3:50010, bytes: 224277, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, duration(ns): 1129868 2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263 src: /172.31.20.57:51794 dest: /172.31.23.3:50010 2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51794, dest: /172.31.23.3:50010, bytes: 780664, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, duration(ns): 2377601 2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 2019-11-02 16:13:49,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 2019-11-02 16:13:49,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 of size 232248 2019-11-02 16:13:49,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 of size 434678 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 to 172.31.19.252:50010 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 to 172.31.19.252:50010 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742073_1249 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 for deletion 2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 (numBytes=53464) to /172.31.19.252:50010 2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742073_1249 file 
/data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 2019-11-02 16:13:50,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 (numBytes=60686) to /172.31.19.252:50010 2019-11-02 16:13:51,310 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269 src: /172.31.19.252:46180 dest: /172.31.23.3:50010 2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.19.252:46180, dest: /172.31.23.3:50010, bytes: 94, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, duration(ns): 2826729 2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:52,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 2019-11-02 16:13:52,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 2019-11-02 16:13:52,986 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 of size 1033299 2019-11-02 16:13:52,987 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 of size 892808 2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 to 172.31.19.252:50010 2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 to 172.31.19.252:50010 2019-11-02 16:13:53,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 (numBytes=914311) to /172.31.19.252:50010 2019-11-02 16:13:53,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 (numBytes=706710) to /172.31.19.252:50010 2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer 
BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 to 172.31.19.252:50010 2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 to 172.31.19.252:50010 2019-11-02 16:13:56,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 (numBytes=186260) to /172.31.19.252:50010 2019-11-02 16:13:56,025 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 2019-11-02 16:13:56,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 2019-11-02 16:13:56,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 of size 36455 2019-11-02 16:13:56,040 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 of size 1801469 2019-11-02 16:13:56,068 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 (numBytes=1768012) to /172.31.19.252:50010 2019-11-02 16:13:58,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 of size 19827 2019-11-02 16:13:58,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 to 172.31.19.252:50010 2019-11-02 16:13:59,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 (numBytes=36519) to /172.31.19.252:50010 2019-11-02 16:13:59,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 of size 267634 2019-11-02 16:14:01,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273 src: /172.31.23.3:50512 dest: /172.31.23.3:50010 2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.31.23.3:50512, dest: /172.31.23.3:50010, bytes: 1029, op: HDFS_WRITE, cliID: DFSClient_attempt_1572710114754_0009_m_000000_0_-2142389405_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, duration(ns): 3798130 2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, type=LAST_IN_PIPELINE terminating 2019-11-02 16:14:01,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 2019-11-02 16:14:01,994 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 of size 375618 2019-11-02 16:14:01,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 to 172.31.19.252:50010 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 of size 1765905 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 to 172.31.19.252:50010 2019-11-02 16:14:02,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 (numBytes=279012) to /172.31.19.252:50010 2019-11-02 16:14:02,008 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 (numBytes=1344870) to /172.31.19.252:50010 2019-11-02 16:14:04,999 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 because on-disk length 990927 is shorter than NameNode recorded length 9223372036854775807 2019-11-02 16:14:08,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 to 172.31.19.252:50010 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 to 172.31.19.252:50010 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742062_1238 file 
/data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068 for deletion 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069 for deletion 2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070 for deletion 2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081 2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071 for deletion 2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072 for deletion 2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742083_1259 file 
/data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083 2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074 for deletion 2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075 for deletion 2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076 for deletion 2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086 2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078 for deletion 2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079 for deletion 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 (numBytes=375618) to /172.31.19.252:50010 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 
blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 (numBytes=36455) to /172.31.19.252:50010 2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060 2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061 2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062 2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063 2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064 2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065 2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066 2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067 2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068 2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069 2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070 2019-11-02 
16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071 2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072 2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074 2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075 2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076 2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078 2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079 2019-11-02 16:14:11,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 to 172.31.19.252:50010 2019-11-02 16:14:11,007 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 (numBytes=25496) to /172.31.19.252:50010 2019-11-02 17:01:35,904 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-793432708-172.31.20.57-1572709584342 Total blocks: 88, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0 2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x15733bb21ccd9a44, containing 1 storage report(s), of which we sent 1. The reports had 88 total blocks and used 1 RPC(s). This took 1 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5. 2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-793432708-172.31.20.57-1572709584342 ```
How do I configure Code::Blocks to use the C11 standard when programming in C?
When using C in Code::Blocks, the C11 standard isn't supported, so the functions ending in _s, such as strnlen_s() and strcpy_s(), can't be used. What do I have to configure in Code::Blocks to use these functions, or to make it support C11?
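One catch worth flagging (my note, not from a posted answer): Settings -> Compiler in Code::Blocks does have a checkbox for `-std=c11`, but the _s functions belong to C11's optional Annex K (the bounds-checking interfaces), which GCC's usual runtimes (glibc, MinGW's msvcrt) generally do not implement, so no Code::Blocks setting by itself will make strcpy_s() appear. Portable code has to probe for Annex K, roughly like this sketch:
```
/* Sketch: request Annex K before any include, then test whether the
 * C runtime actually provides it. */
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
#include <stdio.h>

int main(void)
{
#ifdef __STDC_LIB_EXT1__
    char dst[16];
    strcpy_s(dst, sizeof dst, "hello");  /* available only with Annex K */
    puts(dst);
#else
    puts("This runtime does not provide the Annex K *_s functions.");
#endif
    return 0;
}
```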
A small array problem in an Objective-C environment set up with Code::Blocks
In an Objective-C environment set up with Code::Blocks on Windows 7, the following program runs fine:

    NSArray *arr1 = [[NSArray alloc] initWithObjects:@"aa", @"bb", nil];
    //NSArray *arr1 = @[@"aa", @"bb"];
    NSLog(@"arr1 = %@", arr1);

But when the first statement is replaced by the commented-out one, it doesn't even compile. Why?
Predefined macros of some compilers?
What are the predefined macros of Dev-C++ and Code::Blocks, respectively?
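Since Dev-C++ and Code::Blocks are IDEs that normally drive a MinGW/GCC toolchain (assumption: the bundled compilers are in use), the predefined macros come from that compiler, not from the IDE. The full list can be dumped with `gcc -dM -E - < NUL` on Windows (`< /dev/null` elsewhere), and a few common ones can be checked in code:
```
/* Sketch: probing a few GCC/MinGW predefined macros. */
#include <stdio.h>

int main(void)
{
#ifdef __GNUC__
    printf("GCC %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#endif
#ifdef __MINGW32__
    puts("__MINGW32__ defined: MinGW toolchain");
#endif
#ifdef _WIN32
    puts("_WIN32 defined: Windows target");
#endif
    return 0;
}
```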
Configuration problem with Code::Blocks + wxPack + TDM-GCC
System: Windows 8; Code::Blocks 12.11 (the edition without a bundled compiler); wxPack v2.812.01; tdm64-gcc-4.8.1-3.exe (online installer). I created a wxWidgets project (with the Use wxWidgets DLL and Enable unicode options selected). Building reports "cannot find -lwxmsw28ud_core" and so on, plus another 28 warnings. How should the Code::Blocks 12.11 (no-compiler edition) + wxPack v2.812.01 + tdm64-gcc-4.8.1-3 combination be configured?
hexo博客无法显示背景图片
The blog is built with the Hexo framework using the NexT theme. The background image shows up in local preview, but not on the live site. I have already tried running hexo clean (hexo c) before uploading again, and it didn't help. Hexo's config file:

```
# Hexo Configuration
## Docs: https://hexo.io/docs/configuration.html
## Source: https://github.com/hexojs/hexo/

# Site
title:
subtitle:
description: 这是一个搞着玩的博客,功能暂未完全完善,将就着看吧您内~!
keywords:
author: 肖肖
language: zh-Hans
timezone:

# URL
## If your site is put in a subdirectory, set url as 'http://yoursite.com/child' and root as '/child/'
url: http://yoursite.com
root: /
permalink: :year/:month/:day/:title/
permalink_defaults:

# Directory
source_dir: source
public_dir: public
tag_dir: tags
archive_dir: archives
category_dir: categories
code_dir: downloads/code
i18n_dir: :lang
skip_render:

# Writing
new_post_name: :title.md # File name of new posts
default_layout: post
titlecase: false # Transform title into titlecase
external_link: true # Open external links in new tab
filename_case: 0
render_drafts: false
post_asset_folder: true
relative_link: false
future: true
highlight:
  enable: true
  line_number: true
  auto_detect: false
  tab_replace:

# Home page setting
# path: Root path for your blogs index page. (default = '')
# per_page: Posts displayed per page. (0 = disable pagination)
# order_by: Posts order. (Order by date descending by default)
index_generator:
  path: ''
  per_page: 10
  order_by: -date

# Category & Tag
default_category: uncategorized
category_map:
tag_map:

# Date / Time format
## Hexo uses Moment.js to parse and display date
## You can customize the date format as defined in
## http://momentjs.com/docs/#/displaying/format/
date_format: YYYY-MM-DD
time_format: HH:mm:ss

# Pagination
## Set per_page to 0 to disable pagination
per_page: 10
pagination_dir: page

# Extensions
## Plugins: https://hexo.io/plugins/
## Themes: https://hexo.io/themes/
theme: next

# Deployment
## Docs: https://hexo.io/docs/deployment.html
deploy:
  type: git
  repo:
  branch: master
```

And here is the NexT theme's config file:

```
# ===============================================================
# ========================= ATTENTION! ==========================
# ===============================================================
# NexT repository is moving here: https://github.com/theme-next
# ===============================================================
# It's rebase to v6.0.0 and future maintenance will resume there
# ===============================================================

# ---------------------------------------------------------------
# Theme Core Configuration Settings
# ---------------------------------------------------------------

# Set to true, if you want to fully override the default configuration.
# Useful if you don't want to inherit the theme _config.yml configurations.
override: false

# ---------------------------------------------------------------
# Site Information Settings
# ---------------------------------------------------------------

# To get or check favicons visit: https://realfavicongenerator.net
# Put your favicons into `hexo-site/source/` (recommend) or `hexo-site/themes/next/source/images/` directory.
# Default NexT favicons placed in `hexo-site/themes/next/source/images/` directory.
# And if you want to place your icons in `hexo-site/source/` root directory, you must remove `/images` prefix from pathes.
# For example, you put your favicons into `hexo-site/source/images` directory.
# Then need to rename & redefine they on any other names, otherwise icons from Next will rewrite your custom icons in Hexo.
favicon:
  small: /images/favicon-16x16-next.png
  medium: /images/favicon.ico
  apple_touch_icon: /images/apple-touch-icon-next.png
  safari_pinned_tab: /images/logo.svg
  #android_manifest: /images/manifest.json
  #ms_browserconfig: /images/browserconfig.xml

# Set default keywords (Use a comma to separate)
keywords: "Hexo, NexT"

# Set rss to false to disable feed link.
# Leave rss as empty to use site's feed link.
# Set rss to specific value if you have burned your feed already.
rss:

footer:
  # Specify the date when the site was setup.
  # If not defined, current year will be used.
  since: 2019

  # Icon between year and copyright info.
  icon: user

  # If not defined, will be used `author` from Hexo main config.
  copyright:
  # -------------------------------------------------------------
  # Hexo link (Powered by Hexo).
  powered: true

  theme:
    # Theme & scheme info link (Theme - NexT.scheme).
    enable: true
    # Version info of NexT after scheme info (vX.X.X).
    version: true
  # -------------------------------------------------------------
  # Any custom text can be defined here.
  #custom_text: Hosted by <a target="_blank" href="https://pages.github.com">GitHub Pages</a>

# ---------------------------------------------------------------
# SEO Settings
# ---------------------------------------------------------------

# Canonical, set a canonical link tag in your hexo, you could use it for your SEO of blog.
# See: https://support.google.com/webmasters/answer/139066
# Tips: Before you open this tag, remember set up your URL in hexo _config.yml ( ex. url: http://yourdomain.com )
canonical: true

# Change headers hierarchy on site-subtitle (will be main site description) and on all post/pages titles for better SEO-optimization.
seo: false

# If true, will add site-subtitle to index page, added in main hexo config.
# subtitle: Subtitle
index_with_subtitle: false

# ---------------------------------------------------------------
# Menu Settings
# ---------------------------------------------------------------

# When running the site in a subdirectory (e.g. domain.tld/blog), remove the leading slash from link value (/archives -> archives).
# Usage: `Key: /link/ || icon`
# Key is the name of menu item. If translate for this menu will find in languages - this translate will be loaded; if not - Key name will be used. Key is case-senstive.
# Value before `||` delimeter is the target link.
# Value after `||` delimeter is the name of FontAwesome icon. If icon (with or without delimeter) is not specified, question icon will be loaded.
menu:
  home: / || home
  #about: /about/ || user
  #tags: /tags/ || tags
  #categories: /categories/ || th
  archives: /archives/ || archive
  #schedule: /schedule/ || calendar
  #sitemap: /sitemap.xml || sitemap
  #commonweal: /404/ || heartbeat

# Enable/Disable menu icons.
menu_icons:
  enable: true

# ---------------------------------------------------------------
# Scheme Settings
# ---------------------------------------------------------------

# Schemes
#scheme: Muse
#scheme: Mist
#scheme: Pisces
scheme: Gemini

# ---------------------------------------------------------------
# Sidebar Settings
# ---------------------------------------------------------------

# Social Links.
# Usage: `Key: permalink || icon`
# Key is the link label showing to end users.
# Value before `||` delimeter is the target permalink.
# Value after `||` delimeter is the name of FontAwesome icon. If icon (with or without delimeter) is not specified, globe icon will be loaded.
#social:
  #GitHub: https://github.com/yourname || github
  #E-Mail: mailto:yourname@gmail.com || envelope
  #Google: https://plus.google.com/yourname || google
  #Twitter: https://twitter.com/yourname || twitter
  #FB Page: https://www.facebook.com/yourname || facebook
  #VK Group: https://vk.com/yourname || vk
  #StackOverflow: https://stackoverflow.com/yourname || stack-overflow
  #YouTube: https://youtube.com/yourname || youtube
  #Instagram: https://instagram.com/yourname || instagram
  #Skype: skype:yourname?call|chat || skype

social_icons:
  enable: true
  icons_only: false
  transition: false

# Blog rolls
links_icon: link
links_title: Links
links_layout: block
#links_layout: inline
#links:
  #Title: http://example.com/

# Sidebar Avatar
# in theme directory(source/images): /images/avatar.gif
# in site directory(source/uploads): /uploads/avatar.gif
avatar: /images/avatar.png

# Table Of Contents in the Sidebar
toc:
  enable: true
  # Automatically add list number to toc.
  number: true
  # If true, all words will placed on next lines if header width longer then sidebar width.
  wrap: false

# Creative Commons 4.0 International License.
# http://creativecommons.org/
# Available: by | by-nc | by-nc-nd | by-nc-sa | by-nd | by-sa | zero
#creative_commons: by-nc-sa
#creative_commons:

sidebar:
  # Sidebar Position, available value: left | right (only for Pisces | Gemini).
  position: left
  #position: right

  # Sidebar Display, available value (only for Muse | Mist):
  #  - post    expand on posts automatically. Default.
  #  - always  expand for all pages automatically
  #  - hide    expand only when click on the sidebar toggle icon.
  #  - remove  Totally remove sidebar including sidebar toggle.
  display: post
  #display: always
  #display: hide
  #display: remove

  # Sidebar offset from top menubar in pixels (only for Pisces | Gemini).
  offset: 12
  # Back to top in sidebar (only for Pisces | Gemini).
  b2t: false

  # Scroll percent label in b2t button.
  scrollpercent: true

  # Enable sidebar on narrow view (only for Muse | Mist).
  onmobile: true

# ---------------------------------------------------------------
# Post Settings
# ---------------------------------------------------------------

# Automatically scroll page to section which is under <!-- more --> mark.
scroll_to_more: true

# Automatically saving scroll position on each post/page in cookies.
save_scroll: false

# Automatically excerpt description in homepage as preamble text.
excerpt_description: true

# Automatically Excerpt. Not recommend.
# Please use <!-- more --> in the post to control excerpt accurately.
auto_excerpt:
  enable: true
  length: 0

# Post meta display settings
post_meta:
  item_text: true
  created_at: true
  updated_at: false
  categories: true

# Post wordcount display settings
# Dependencies: https://github.com/willin/hexo-wordcount
post_wordcount:
  item_text: true
  wordcount: true
  min2read: true
  totalcount: false
  separated_meta: true

# Wechat Subscriber
#wechat_subscriber:
  #enabled: true
  #qcode: /path/to/your/wechatqcode ex. /uploads/wechat-qcode.jpg
  #description: ex. subscribe to my blog by scanning my public wechat account

# Reward
#reward_comment: Donate comment here
#wechatpay: /images/wechatpay.jpg
#alipay: /images/alipay.jpg
#bitcoin: /images/bitcoin.png

# Declare license on posts
post_copyright:
  enable: false
  license: CC BY-NC-SA 3.0
  license_url: https://creativecommons.org/licenses/by-nc-sa/3.0/

# ---------------------------------------------------------------
# Misc Theme Settings
# ---------------------------------------------------------------

# Reduce padding / margin indents on devices with narrow width.
mobile_layout_economy: false

# Android Chrome header panel color ($black-deep).
android_chrome_color: "#222"

# Custom Logo.
# !!Only available for Default Scheme currently.
# Options:
#   enabled: [true/false] - Replace with specific image
#   image: url-of-image   - Images's url
custom_logo:
  enabled: true
  image:

# Code Highlight theme
# Available value:
#    normal | night | night eighties | night blue | night bright
# https://github.com/chriskempson/tomorrow-theme
highlight_theme: normal

# ---------------------------------------------------------------
# Font Settings
# - Find fonts on Google Fonts (https://www.google.com/fonts)
# - All fonts set here will have the following styles:
#     light, light italic, normal, normal italic, bold, bold italic
# - Be aware that setting too much fonts will cause site running slowly
# - Introduce in 5.0.1
# ---------------------------------------------------------------
# CAUTION! Safari Version 10.1.2 bug: https://github.com/iissnan/hexo-theme-next/issues/1844
# To avoid space between header and sidebar in Pisces / Gemini themes recommended to use Web Safe fonts for `global` (and `logo`):
# Arial | Tahoma | Helvetica | Times New Roman | Courier New | Verdana | Georgia | Palatino | Garamond | Comic Sans MS | Trebuchet MS
# ---------------------------------------------------------------
font:
  enable: true

  # Uri of fonts host. E.g. //fonts.googleapis.com (Default).
  host:

  # Font options:
  # `external: true` will load this font family from `host` above.
  # `family: Times New Roman`. Without any quotes.
  # `size: xx`. Use `px` as unit.

  # Global font settings used on <body> element.
  global:
    external: true
    family: Lato
    size:

  # Font settings for Headlines (h1, h2, h3, h4, h5, h6).
  # Fallback to `global` font settings.
  headings:
    external: true
    family:
    size:

  # Font settings for posts.
  # Fallback to `global` font settings.
  posts:
    external: true
    family:

  # Font settings for Logo.
  # Fallback to `global` font settings.
  logo:
    external: true
    family:
    size:

  # Font settings for <code> and code blocks.
  codes:
    external: true
    family:
    size:

# ---------------------------------------------------------------
# Third Party Services Settings
# ---------------------------------------------------------------

# MathJax Support
mathjax:
  enable: false
  per_page: false
  cdn: //cdn.bootcss.com/mathjax/2.7.1/latest.js?config=TeX-AMS-MML_HTMLorMML

# Han Support docs: https://hanzi.pro/
han: false

# Swiftype Search API Key
#swiftype_key:

# Baidu Analytics ID
#baidu_analytics:

# Duoshuo ShortName
#duoshuo_shortname:

# Disqus
disqus:
  enable: false
  shortname:
  count: true

# Hypercomments
#hypercomments_id:

# changyan
changyan:
  enable: false
  appid:
  appkey:

# Valine.
# You can get your appid and appkey from https://leancloud.cn
# more info please open https://valine.js.org
valine:
  enable: true
  appid:
  appkey:
  notify: false # mail notifier , https://github.com/xCss/Valine/wiki
  verify: false # Verification code
  placeholder: 高冷的你说点什么吧( ´・・)ノ(._.`)
  avatar: mm # gravatar style
  guest_info: nick,mail,link # custom comment header
  pageSize: 10 # pagination size

# Support for youyan comments system.
# You can get your uid from http://www.uyan.cc
#youyan_uid: your uid

# Support for LiveRe comments system.
# You can get your uid from https://livere.com/insight/myCode (General web site)
#livere_uid: your uid

# Gitment
# Introduction: https://imsun.net/posts/gitment-introduction/
# You can get your Github ID from https://api.github.com/users/<Github username>
gitment:
  enable: false
  mint: true # RECOMMEND, A mint on Gitment, to support count, language and proxy_gateway
  count: true # Show comments count in post meta area
  lazy: false # Comments lazy loading with a button
  cleanly: false # Hide 'Powered by ...' on footer, and more
  language: # Force language, or auto switch by theme
  github_user: # MUST HAVE, Your Github ID
  github_repo: # MUST HAVE, The repo you use to store Gitment comments
  client_id: # MUST HAVE, Github client id for the Gitment
  client_secret: # EITHER this or proxy_gateway, Github access secret token for the Gitment
  proxy_gateway: # Address of api proxy, See: https://github.com/aimingoo/intersect
  redirect_protocol: # Protocol of redirect_uri with force_redirect_protocol when mint enabled

# Baidu Share
# Available value:
#    button | slide
# Warning: Baidu Share does not support https.
#baidushare:
##  type: button

# Share
# This plugin is more useful in China, make sure you known how to use it.
# And you can find the use guide at official webiste: http://www.jiathis.com/.
# Warning: JiaThis does not support https.
#jiathis:
##uid: Get this uid from http://www.jiathis.com/
#add_this_id:

# Share
duoshuo_share: true

# NeedMoreShare2
# This plugin is a pure javascript sharing lib which is useful in China.
# See: https://github.com/revir/need-more-share2
# Also see: https://github.com/DzmVasileusky/needShareButton
# iconStyle: default | box
# boxForm: horizontal | vertical
# position: top / middle / bottom + Left / Center / Right
# networks: Weibo,Wechat,Douban,QQZone,Twitter,Linkedin,Mailto,Reddit,
#           Delicious,StumbleUpon,Pinterest,Facebook,GooglePlus,Slashdot,
#           Technorati,Posterous,Tumblr,GoogleBookmarks,Newsvine,
#           Evernote,Friendfeed,Vkontakte,Odnoklassniki,Mailru
needmoreshare2:
  enable: false
  postbottom:
    enable: false
    options:
      iconStyle: box
      boxForm: horizontal
      position: bottomCenter
      networks: Weibo,Wechat,Douban,QQZone,Twitter,Facebook
  float:
    enable: false
    options:
      iconStyle: box
      boxForm: horizontal
      position: middleRight
      networks: Weibo,Wechat,Douban,QQZone,Twitter,Facebook

# Google Webmaster tools verification setting
# See: https://www.google.com/webmasters/
#google_site_verification:

# Google Analytics
#google_analytics:

# Bing Webmaster tools verification setting
# See: https://www.bing.com/webmaster/
#bing_site_verification:

# Yandex Webmaster tools verification setting
# See: https://webmaster.yandex.ru/
#yandex_site_verification:

# CNZZ count
#cnzz_siteid:

# Application Insights
# See https://azure.microsoft.com/en-us/services/application-insights/
# application_insights:

# Make duoshuo show UA
# user_id must NOT be null when admin_enable is true!
# you can visit http://dev.duoshuo.com get duoshuo user id.
duoshuo_info:
  ua_enable: true
  admin_enable: false
  user_id: 0
  #admin_nickname: Author

# Post widgets & FB/VK comments settings.
# ---------------------------------------------------------------
# Facebook SDK Support.
# https://github.com/iissnan/hexo-theme-next/pull/410
facebook_sdk:
  enable: false
  app_id: #<app_id>
  fb_admin: #<user_id>
  like_button: #true
  webmaster: #true

# Facebook comments plugin
# This plugin depends on Facebook SDK.
# If facebook_sdk.enable is false, Facebook comments plugin is unavailable.
facebook_comments_plugin:
  enable: false
  num_of_posts: 10 # min posts num is 1
  width: 100% # default width is 550px
  scheme: light # default scheme is light (light or dark)

# VKontakte API Support.
# To get your AppID visit https://vk.com/editapp?act=create
vkontakte_api:
  enable: false
  app_id: #<app_id>
  like: true
  comments: true
  num_of_posts: 10

# Star rating support to each article.
# To get your ID visit https://widgetpack.com
rating:
  enable: false
  id: #<app_id>
  color: fc6423

# ---------------------------------------------------------------

# Show number of visitors to each article.
# You can visit https://leancloud.cn get AppID and AppKey.
leancloud_visitors:
  enable: false
  app_id: #<app_id>
  app_key: #<app_key>

# Another tool to show number of visitors to each article.
# visit https://console.firebase.google.com/u/0/ to get apiKey and projectId
# visit https://firebase.google.com/docs/firestore/ to get more information about firestore
firestore:
  enable: false
  collection: articles #required, a string collection name to access firestore database
  apiKey: #required
  projectId: #required
  bluebird: false #enable this if you want to include bluebird 3.5.1(core version) Promise polyfill

# Show PV/UV of the website/page with busuanzi.
# Get more information on http://ibruce.info/2015/04/04/busuanzi/
busuanzi_count:
  # count values only if the other configs are false
  enable: true
  # custom uv span for the whole site
  site_uv: true
  site_uv_header: 本站访问人数
  site_uv_footer: 人次
  # custom pv span for the whole site
  site_pv: true
  site_pv_header: 本站访问量
  site_pv_footer: 次
  # custom pv span for one page only
  page_pv: true
  page_pv_header: 本文阅读量
  page_pv_footer: 次

# Tencent analytics ID
# tencent_analytics:

# Tencent MTA ID
# tencent_mta:

# Enable baidu push so that the blog will push the url to baidu automatically which is very helpful for SEO
baidu_push: false

# Google Calendar
# Share your recent schedule to others via calendar page
#
# API Documentation:
# https://developers.google.com/google-apps/calendar/v3/reference/events/list
calendar:
  enable: false
  calendar_id: <required>
  api_key: <required>
  orderBy: startTime
  offsetMax: 24
  offsetMin: 4
  timeZone:
  showDeleted: false
  singleEvents: true
  maxResults: 250

# Algolia Search
algolia_search:
  enable: false
  hits:
    per_page: 10
  labels:
    input_placeholder: Search for Posts
    hits_empty: "We didn't find any results for the search: ${query}"
    hits_stats: "${hits} results found in ${time} ms"

# Local search
# Dependencies: https://github.com/flashlab/hexo-generator-search
local_search:
  enable: true
  # if auto, trigger search by changing input
  # if manual, trigger search by pressing enter key or search button
  trigger: auto
  # show top n results per article, show all results by setting to -1
  top_n_per_article: 1

# ---------------------------------------------------------------
# Tags Settings
# ---------------------------------------------------------------

# External URL with BASE64 encrypt & decrypt.
# Usage: {% exturl text url "title" %}
# Alias: {% extlink text url "title" %}
exturl: false

# Note tag (bs-callout).
note:
  # Note tag style values:
  #  - simple    bs-callout old alert style. Default.
  #  - modern    bs-callout new (v2-v3) alert style.
  #  - flat      flat callout style with background, like on Mozilla or StackOverflow.
  #  - disabled  disable all CSS styles import of note tag.
  style: simple
  icons: false
  border_radius: 3
  # Offset lighter of background in % for modern and flat styles (modern: -12 | 12; flat: -18 | 6).
  # Offset also applied to label tag variables. This option can work with disabled note tag.
  light_bg_offset: 0

# Label tag.
label: true

# Tabs tag.
tabs:
  enable: true
  transition:
    tabs: false
    labels: true
  border_radius: 0

#! ---------------------------------------------------------------
#! DO NOT EDIT THE FOLLOWING SETTINGS
#! UNLESS YOU KNOW WHAT YOU ARE DOING
#! ---------------------------------------------------------------

# Use velocity to animate everything.
motion:
  enable: true
  async: false
  transition:
    # Transition variants:
    # fadeIn | fadeOut | flipXIn | flipXOut | flipYIn | flipYOut | flipBounceXIn | flipBounceXOut | flipBounceYIn | flipBounceYOut
    # swoopIn | swoopOut | whirlIn | whirlOut | shrinkIn | shrinkOut | expandIn | expandOut
    # bounceIn | bounceOut | bounceUpIn | bounceUpOut | bounceDownIn | bounceDownOut | bounceLeftIn | bounceLeftOut | bounceRightIn | bounceRightOut
    # slideUpIn | slideUpOut | slideDownIn | slideDownOut | slideLeftIn | slideLeftOut | slideRightIn | slideRightOut
    # slideUpBigIn | slideUpBigOut | slideDownBigIn | slideDownBigOut | slideLeftBigIn | slideLeftBigOut | slideRightBigIn | slideRightBigOut
    # perspectiveUpIn | perspectiveUpOut | perspectiveDownIn | perspectiveDownOut | perspectiveLeftIn | perspectiveLeftOut | perspectiveRightIn | perspectiveRightOut
    post_block: fadeIn
    post_header: slideDownIn
    post_body: slideDownIn
    coll_header: slideLeftIn
    # Only for Pisces | Gemini.
    sidebar: slideUpIn

# Fancybox
fancybox: true

# Progress bar in the top during page loading.
pace: true
# Themes list:
#pace-theme-big-counter
#pace-theme-bounce
#pace-theme-barber-shop
#pace-theme-center-atom
#pace-theme-center-circle
#pace-theme-center-radar
#pace-theme-center-simple
#pace-theme-corner-indicator
#pace-theme-fill-left
#pace-theme-flash
#pace-theme-loading-bar
#pace-theme-mac-osx
#pace-theme-minimal
# For example
# pace_theme: pace-theme-center-simple
pace_theme: pace-theme-minimal

# Canvas-nest
canvas_nest: true

# three_waves
three_waves: false

# canvas_lines
canvas_lines: false

# canvas_sphere
canvas_sphere: false

# Only fit scheme Pisces
# Canvas-ribbon
# size: The width of the ribbon.
# alpha: The transparency of the ribbon.
# zIndex: The display level of the ribbon.
canvas_ribbon:
  enable: false
  size: 300
  alpha: 0.6
  zIndex: -1

# Script Vendors.
# Set a CDN address for the vendor you want to customize.
# For example
#    jquery: https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js
# Be aware that you should use the same version as internal ones to avoid potential problems.
# Please use the https protocol of CDN files when you enable https on your site.
vendors:
  # Internal path prefix. Please do not edit it.
  _internal: lib

  # Internal version: 2.1.3
  jquery:

  # Internal version: 2.1.5
  # See: http://fancyapps.com/fancybox/
  fancybox:
  fancybox_css:

  # Internal version: 1.0.6
  # See: https://github.com/ftlabs/fastclick
  fastclick:

  # Internal version: 1.9.7
  # See: https://github.com/tuupola/jquery_lazyload
  lazyload:

  # Internal version: 1.2.1
  # See: http://VelocityJS.org
  velocity:

  # Internal version: 1.2.1
  # See: http://VelocityJS.org
  velocity_ui:

  # Internal version: 0.7.9
  # See: https://faisalman.github.io/ua-parser-js/
  ua_parser:

  # Internal version: 4.6.2
  # See: http://fontawesome.io/
  fontawesome:

  # Internal version: 1
  # https://www.algolia.com
  algolia_instant_js:
  algolia_instant_css:

  # Internal version: 1.0.2
  # See: https://github.com/HubSpot/pace
  # Or use direct links below:
  # pace: //cdn.bootcss.com/pace/1.0.2/pace.min.js
  # pace_css: //cdn.bootcss.com/pace/1.0.2/themes/blue/pace-theme-flash.min.css
  pace:
  pace_css:

  # Internal version: 1.0.0
  # https://github.com/hustcc/canvas-nest.js
  canvas_nest:

  # three
  three:

  # three_waves
  # https://github.com/jjandxa/three_waves
  three_waves:

  # three_waves
  # https://github.com/jjandxa/canvas_lines
  canvas_lines:

  # three_waves
  # https://github.com/jjandxa/canvas_sphere
  canvas_sphere:

  # Internal version: 1.0.0
  # https://github.com/zproo/canvas-ribbon
  canvas_ribbon:

  # Internal version: 3.3.0
  # https://github.com/ethantw/Han
  han:

  # needMoreShare2
  # https://github.com/revir/need-more-share2
  needMoreShare2:

# Assets
css: css
js: js
images: images

# Online contact
daovoice: true
daovoice_app_id: # fill in the app_id you just obtained here

# Theme version
live2d:
  enable: false
  model: z16
  bottom: -30
version: 5.1.4
```

I've been fiddling with this for two days and I'm one step away from rolling everything back, though I suspect a rollback wouldn't help either. A newbie here; thanks in advance, everyone. The file I edited to change the background image is Blog\themes\next\source\css\_custom\custom.styl:

```
// Custom styles.

// Add a shadow effect to posts on the home page
.post {
  margin-top: 60px;
  margin-bottom: 60px;
  padding: 25px;
  -webkit-box-shadow: 0 0 5px rgba(202, 203, 203, .5);
  -moz-box-shadow: 0 0 5px rgba(202, 203, 204, .5);
}

.site-meta {
  background: $orange; // the color of the sky -- a perfect match for my glasses
}

// Mouse cursor style
* {
  cursor: url(""),auto!important
}
:active {
  cursor: url(""),auto!important
}

// Custom styles.
body {
  background-image: url(/images/background.jpg);
  background-attachment: fixed;
  background-repeat: no-repeat;
  background-size: cover;

  // change the background color and transparency
  .main-inner {
    padding: 25px;
    opacity: 0.85;
    border-radius: 10px;
    right: 0 !important;
    top: 0 !important;
    bottom: 0 !important;
  }
}

body .main {
  margin-bottom: 0px;
}
```
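A hedged debugging sketch rather than a definitive answer: the two usual culprits for "works in hexo server, missing after deploy" are (a) the image never making it into the generated public/ folder, and (b) the site being served from a sub-path, which breaks the absolute /images/background.jpg URL hard-coded in custom.styl. The commands assume the default directory layout from the configs above:

```sh
# (a) confirm the image is part of the generated site:
hexo clean && hexo generate
ls public/images/background.jpg   # if missing, put background.jpg in
                                  # themes/next/source/images/ and regenerate
hexo deploy

# (b) if the blog lives at https://<user>.github.io/<repo>/ rather than the
#     domain root, /images/background.jpg resolves outside the site; set
#     url: https://<user>.github.io/<repo> and root: /<repo>/ in _config.yml.
```

Either way, open the deployed page with the browser dev tools (Network tab) and check which URL the browser actually requests for background.jpg and what status comes back; that tells you which of the two cases you are in.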
How do I quit the debugger in Code::Blocks?
How do I quit the debugger in Code::Blocks? I started it and now I have no idea what to do next, haha.
Code::Blocks on Windows fails to compile a HelloWorld program -- expert advice wanted!
A very simple Hello World program. When compiling, the error points at D:\MinGW\include\c++\3.4.5\bits\codecvt.h|475: the file bits/codecvt_specializations.h referenced there cannot be found. Where did things go wrong?
​ ![CSDN mobile Q&A][1] ![CSDN mobile Q&A][2] [1]: http://f.hiphotos.baidu.com/zhidao/pic/item/63d9f2d3572c11df047ef3c0612762d0f603c2a2.jpg [2]: http://f.hiphotos.baidu.com/zhidao/pic/item/9922720e0cf3d7cabed67095f01fbe096b63a935.jpg
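Not a definitive diagnosis, but with GCC 3.4.5 this header normally lives in MinGW's target-specific C++ include directory, so the error usually means an incomplete or mixed g++ installation. A quick check, assuming the D:\MinGW path from the error message:

```
:: Search the whole C++ include tree for the missing header:
dir /s /b D:\MinGW\include\c++\3.4.5 | findstr codecvt_specializations
:: A complete install keeps it under the target directory, e.g.
::   D:\MinGW\include\c++\3.4.5\mingw32\bits\codecvt_specializations.h
:: If nothing is found, reinstall the MinGW g++ component (or install a
:: current TDM/MinGW-w64 toolchain) and re-select it under
:: Settings -> Compiler -> Toolchain executables in Code::Blocks.
```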
Why can't debug ever be used in my Code::Blocks? Something like "no bonds" appears at the bottom, and there's no cursor in the left margin either.
![screenshot](https://img-ask.csdn.net/upload/201811/01/1541061343_86560.png)
Code completion in Code::Blocks: suggestions are supposed to pop up as you type, so why don't mine?
Code completion in Code::Blocks is supposed to pop up suggestions as you type, so why doesn't mine? The one in the tutorial video does:
![screenshot](https://img-ask.csdn.net/upload/201909/01/1567346943_8740.png)
![screenshot](https://img-ask.csdn.net/upload/201909/01/1567346982_636032.png)
The second screenshot is mine -- nothing appears.
After installing tensorflow-gpu, running a program gives "An error ocurred while starting the kernel"
TensorFlow 2.0, CUDA 10.2, cuDNN 7.6. The import statements run fine, but executing a model.add() statement fails with:

2019-12-29 17:01:21.546770: F .\tensorflow/core/kernels/random_op_gpu.h:227] Non-OK-status: GpuLaunchKernel(FillPhiloxRandomKernelLaunch<Distribution>, num_blocks, block_size, 0, d.stream(), gen, data, size, dist) status: Internal: invalid device function

I haven't found a suitable fix for this; asking for help here. Thanks!
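A plausible cause worth ruling out first: the prebuilt TensorFlow 2.0 GPU wheels were compiled against CUDA 10.0 / cuDNN 7.x, and "invalid device function" is a typical symptom of a CUDA version or compute-capability mismatch, such as running those wheels against CUDA 10.2. A minimal check sketch (nothing below is taken from the question; adjust to the actual environment):

```sh
# What the installed wheel reports about itself and the visible GPUs:
python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_built_with_cuda())"
python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"
nvcc --version     # CUDA toolkit actually installed (10.2 here, per the question)
nvidia-smi         # driver version and GPU model
# If the toolkit is 10.2 while the wheel expects 10.0, install CUDA 10.0 (and a
# matching cuDNN) side by side, or move to a TF release built for the newer CUDA.
```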
Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error, and files cannot be put to HDFS either
The Hadoop version is 3.1 and Ubuntu is 18.

Problem 1: browsing the HDFS directory shows:
Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error

Problem 2: the namenode log is as follows:
```
438 WARN org.eclipse.jetty.servlet.ServletHandler: Error for /webhdfs/v1/
java.lang.NoClassDefFoundError: javax/activation/DataSource
    at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:457)
    at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85)
    at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:236)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:186)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:146)
    at javax.xml.bind.ContextFinder.find(ContextFinder.java:350)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:446)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:409)
    at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.<init>(WadlApplicationContextImpl.java:103)
    at com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:100)
    at com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
    at com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:106)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1359)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
    at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
    at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
    at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:790)
    at com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:509)
    at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
    at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
    at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
    at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
    at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:577)
    at javax.servlet.GenericServlet.init(GenericServlet.java:244)
    at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:643)
    at org.eclipse.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:499)
    at org.eclipse.jetty.servlet.ServletHolder.ensureInstance(ServletHolder.java:791)
    at org.eclipse.jetty.servlet.ServletHolder.prepare(ServletHolder.java:776)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:579)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
    ... 65 more
2019-06-18 15:35:01,950 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
2019-06-18 15:39:17,698 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 56 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 22
2019-06-18 15:39:25,202 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
2019-06-18 15:39:45,858 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
```
Datanode log attached:

2019-06-18 14:52:36,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************ STARTUP_MSG: Starting DataNode STARTUP_MSG: host = gx-virtual-machine/127.0.1.1 STARTUP_MSG: args = [] STARTUP_MSG: version = 3.2.0 STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang3-3.7.jar:/usr/local/hadoop/share/hadoop/common/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-text-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/co
mmon/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-kms-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/local/hadoo
p/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-text-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/h
dfs/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/local/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.0.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 2019-01-08T06:08Z
STARTUP_MSG: java = 11.0.3
************************************************************/
2019-06-18 14:52:36,863 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-18 14:52:41,503 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:42,424 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2019-06-18 14:52:44,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-06-18 14:52:44,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2019-06-18 14:52:46,504 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-18 14:52:46,511 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2019-06-18 14:52:46,566 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is gx-virtual-machine
2019-06-18 14:52:46,567 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-18 14:52:46,592 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2019-06-18 14:52:46,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:9866
2019-06-18 14:52:46,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 10485760 bytes/s
2019-06-18 14:52:46,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 50
2019-06-18 14:52:47,198 INFO org.eclipse.jetty.util.log: Logging initialized @15269ms
2019-06-18 14:52:48,022 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-18 14:52:48,062 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2019-06-18 14:52:48,161 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-06-18 14:52:48,556 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 44121
2019-06-18 14:52:48,580 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-06T01:11:56+08:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-18 14:52:49,011 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@7876d598{/logs,file:///usr/local/hadoop/logs/,AVAILABLE}
2019-06-18 14:52:49,018 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@5af28b27{/static,file:///usr/local/hadoop/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2019-06-18 14:52:50,151 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@547e29a4{/,file:///usr/local/hadoop/share/hadoop/hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2019-06-18 14:52:50,242 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@6f45a1a0{HTTP/1.1,[http/1.1]}{localhost:44121}
2019-06-18 14:52:50,243 INFO org.eclipse.jetty.server.Server: Started @18329ms
2019-06-18 14:52:52,165 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:9864
2019-06-18 14:52:52,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2019-06-18 14:52:52,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2019-06-18 14:52:52,242 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2019-06-18 14:52:52,720 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2019-06-18 14:52:52,880 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9867
2019-06-18 14:52:54,839 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:9867
2019-06-18 14:52:55,160 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2019-06-18 14:52:55,365 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2019-06-18 14:52:55,418 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
2019-06-18 14:52:55,532 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-06-18 14:52:55,561 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9867: starting
2019-06-18 14:52:58,314 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2019-06-18 14:52:58,329 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-06-18 14:52:58,458 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 55815@gx-virtual-machine
2019-06-18 14:52:58,478 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory with location [DISK]file:/usr/local/hadoop/tmp/dfs/data is not formatted for namespace 317473294. Formatting...
2019-06-18 14:52:58,479 INFO org.apache.hadoop.hdfs.server.common.Storage: Generated new storageID DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e for directory /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:58,749 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:58,750 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:58,753 INFO org.apache.hadoop.hdfs.server.common.Storage: Block pool storage directory for location [DISK]file:/usr/local/hadoop/tmp/dfs/data and block pool id BP-200946205-127.0.1.1-1560840480894 is not formatted. Formatting ...
2019-06-18 14:52:58,753 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-200946205-127.0.1.1-1560840480894 directory /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894/current
2019-06-18 14:52:58,772 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=317473294;bpid=BP-200946205-127.0.1.1-1560840480894;lv=-57;nsInfo=lv=-65;cid=CID-eb45654d-0bc6-4348-b02f-e03603e1ae37;nsid=317473294;c=1560840480894;bpid=BP-200946205-127.0.1.1-1560840480894;dnuuid=null
2019-06-18 14:52:58,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6a2049c6-1a18-437a-97bd-51c5bb65a639
2019-06-18 14:52:59,549 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e
2019-06-18 14:52:59,553 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - [DISK]file:/usr/local/hadoop/tmp/dfs/data, StorageType: DISK
2019-06-18 14:52:59,615 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2019-06-18 14:52:59,680 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:59,801 INFO org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker: Scheduled health check for volume /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:59,809 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:59,839 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data...
2019-06-18 14:53:00,166 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-200946205-127.0.1.1-1560840480894 on /usr/local/hadoop/tmp/dfs/data: 327ms
2019-06-18 14:53:00,168 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-200946205-127.0.1.1-1560840480894: 359ms
2019-06-18 14:53:00,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data...
2019-06-18 14:53:00,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice: Replica Cache file: /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894/current/replicas doesn't exist
2019-06-18 14:53:00,198 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data: 17ms
2019-06-18 14:53:00,198 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-200946205-127.0.1.1-1560840480894: 27ms
2019-06-18 14:53:00,208 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:53:00,221 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/hadoop/tmp/dfs/data, DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e): finished scanning block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:53:00,401 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/hadoop/tmp/dfs/data, DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e): no suitable block pools found to scan. Waiting 1814399799 ms.
2019-06-18 14:53:00,418 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 2019/6/18 下午8:05 with interval of 21600000ms
2019-06-18 14:53:00,463 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-200946205-127.0.1.1-1560840480894 (Datanode Uuid 6a2049c6-1a18-437a-97bd-51c5bb65a639) service to localhost/127.0.0.1:9000 beginning handshake with NN
2019-06-18 14:53:00,825 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-200946205-127.0.1.1-1560840480894 (Datanode Uuid 6a2049c6-1a18-437a-97bd-51c5bb65a639) service to localhost/127.0.0.1:9000 successfully registered with NN
2019-06-18 14:53:00,825 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/127.0.0.1:9000 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2019-06-18 14:53:01,524 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb210af820fa10abf, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 19 msec to generate and 231 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-06-18 14:53:01,525 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 15:44:37,567 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001 src: /127.0.0.1:34774 dest: /127.0.0.1:9866
2019-06-18 15:44:37,733 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34774, dest: /127.0.0.1:9866, bytes: 8260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001, duration(ns): 75831098
2019-06-18 15:44:37,737 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,256 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002 src: /127.0.0.1:34776 dest: /127.0.0.1:9866
2019-06-18 15:44:38,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34776, dest: /127.0.0.1:9866, bytes: 953, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002, duration(ns): 5252820
2019-06-18 15:44:38,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,340 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003 src: /127.0.0.1:34778 dest: /127.0.0.1:9866
2019-06-18 15:44:38,365 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34778, dest: /127.0.0.1:9866, bytes: 11392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003, duration(ns): 19816531
2019-06-18 15:44:38,372 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,428 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004 src: /127.0.0.1:34780 dest: /127.0.0.1:9866
2019-06-18 15:44:38,455 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34780, dest: /127.0.0.1:9866, bytes: 1061, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004, duration(ns): 9820674
2019-06-18 15:44:38,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005 src: /127.0.0.1:34782 dest: /127.0.0.1:9866
2019-06-18 15:44:38,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34782, dest: /127.0.0.1:9866, bytes: 620, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005, duration(ns): 9424051
2019-06-18 15:44:38,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,569 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006 src: /127.0.0.1:34784 dest: /127.0.0.1:9866
2019-06-18 15:44:38,579 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34784, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006, duration(ns): 6662498
2019-06-18 15:44:38,579 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007 src: /127.0.0.1:34786 dest: /127.0.0.1:9866
2019-06-18 15:44:38,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34786, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007, duration(ns): 5047916
2019-06-18 15:44:38,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,713 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008 src: /127.0.0.1:34788 dest: /127.0.0.1:9866
2019-06-18 15:44:38,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34788, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008, duration(ns): 8532382
2019-06-18 15:44:38,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009 src: /127.0.0.1:34790 dest: /127.0.0.1:9866
2019-06-18 15:44:38,807 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34790, dest: /127.0.0.1:9866, bytes: 690, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009, duration(ns): 5589094
2019-06-18 15:44:38,813 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:01,961 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010 src: /127.0.0.1:36578 dest: /127.0.0.1:9866
2019-06-19 09:54:02,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36578, dest: /127.0.0.1:9866, bytes: 8260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010, duration(ns): 32739756
2019-06-19 09:54:02,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,125 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011 src: /127.0.0.1:36580 dest: /127.0.0.1:9866
2019-06-19 09:54:02,154 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36580, dest: /127.0.0.1:9866, bytes: 953, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011, duration(ns): 12137675
2019-06-19 09:54:02,154 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012 src: /127.0.0.1:36582 dest: /127.0.0.1:9866
2019-06-19 09:54:02,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36582, dest: /127.0.0.1:9866, bytes: 11392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012, duration(ns): 8740891
2019-06-19 09:54:02,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,307 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013 src: /127.0.0.1:36584 dest: /127.0.0.1:9866
2019-06-19 09:54:02,322 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36584, dest: /127.0.0.1:9866, bytes: 1061, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013, duration(ns): 8680367
2019-06-19 09:54:02,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014 src: /127.0.0.1:36586 dest: /127.0.0.1:9866
2019-06-19 09:54:02,413 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36586, dest: /127.0.0.1:9866, bytes: 620, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014, duration(ns): 8474258
2019-06-19 09:54:02,413 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015 src: /127.0.0.1:36588 dest: /127.0.0.1:9866
2019-06-19 09:54:02,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36588, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, duration(ns): 6946259
2019-06-19 09:54:02,503 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016 src: /127.0.0.1:36590 dest: /127.0.0.1:9866
2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36590, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, duration(ns): 6602106
2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,635 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017 src: /127.0.0.1:36592 dest: /127.0.0.1:9866
2019-06-19 09:54:02,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36592, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017, duration(ns): 9690339
2019-06-19 09:54:02,654 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
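For context on the entries above: each `Receiving ... blk_...` / `clienttrace ... op: HDFS_WRITE` / `PacketResponder ... terminating` triple records one block being written to this DataNode by an HDFS client over the streaming port 9866. A minimal sketch of a client write that produces this kind of trace, using the standard Hadoop `FileSystem` API (the NameNode address `hdfs://localhost:9000` is taken from the log above; the file path is a hypothetical example):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode RPC address as seen in the log ("service to localhost/127.0.0.1:9000");
        // only needed if core-site.xml is not on the classpath
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        try (FileSystem fs = FileSystem.get(conf);
             // Hypothetical target path; create() asks the NameNode to allocate a block,
             // then the client streams packets to the DataNode (port 9866 above)
             FSDataOutputStream out = fs.create(new Path("/user/hadoop/hello.txt"))) {
            out.writeBytes("hello hdfs\n");
        } // closing the stream is what triggers the PacketResponder "terminating" line
    }
}
```

With a replication factor of 1, as on this single-DataNode setup, the DataNode is the final stage of the write pipeline, which is why every responder line reads `type=LAST_IN_PIPELINE`.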
In a Hadoop cluster, one node's jps does not show the DataNode, yet the node looks normal in the web UI. What is the cause, and how can it be fixed?
The problem is as the title says. The output of `hadoop dfsadmin -report` is as follows:

```
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

19/05/30 09:31:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 151164198912 (140.78 GB)
Present Capacity: 135626715136 (126.31 GB)
DFS Remaining: 135626620928 (126.31 GB)
DFS Used: 94208 (92 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 192.168.182.101:50010 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 50388066304 (46.93 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 2861187072 (2.66 GB)
DFS Remaining: 44960407552 (41.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.23%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 30 09:31:11 CST 2019

Name: 192.168.182.102:50010 (node01)
Hostname: node01
Decommission Status : Normal
Configured Capacity: 50388066304 (46.93 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 2534232064 (2.36 GB)
DFS Remaining: 45287366656 (42.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 30 09:31:09 CST 2019

Name: 192.168.182.103:50010 (node02)
Hostname: node02
Decommission Status : Normal
Configured Capacity: 50388066304 (46.93 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 2442747904 (2.27 GB)
DFS Remaining: 45378846720 (42.26 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.06%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 30 09:31:11 CST 2019
```

Below are the jps results from the three nodes:

![master](https://img-ask.csdn.net/upload/201905/30/1559180151_866479.png)

It is only this node01 on which jps cannot find the DataNode:

![node01](https://img-ask.csdn.net/upload/201905/30/1559180180_666279.png)

![node02](https://img-ask.csdn.net/upload/201905/30/1559180191_598391.png)

Yet the node is still visible in the web UI. Has anyone run into this? What is the cause, and is there a solution?

![web UI](https://img-ask.csdn.net/upload/201905/30/1559180236_15578.png)
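For what it's worth, `dfsadmin -report` and the web UI both reflect the NameNode's own view of the cluster, so that view can be cross-checked programmatically. A sketch of the equivalent query through the HDFS client API (the URI `hdfs://master:9000` is an assumption; substitute the cluster's actual `fs.defaultFS`):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class LiveDatanodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode URI; must match this cluster's fs.defaultFS
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(URI.create("hdfs://master:9000"), conf)) {
            // Same data that backs "Live datanodes" in dfsadmin -report and the web UI
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.println(dn.getHostName() + " : " + dn.getXferAddr());
            }
        }
    }
}
```

If node01 appears both here and in the web UI, the DataNode JVM is in fact alive, and the discrepancy is on the `jps` side: `jps` lists only JVMs owned by the current user and reads the `/tmp/hsperfdata_<user>` directory, so running it as a different user (or after those perf files were cleaned out of /tmp) can miss a healthy process. A common cross-check is `ps -ef | grep DataNode` on node01.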
The NameNode shuts itself down after the Hadoop cluster starts
```
2017-09-05 10:14:17,973 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON, in safe mode extension. The reported blocks 189 has reached the threshold 0.9990 of total blocks 189. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 9 seconds.
2017-09-05 10:14:23,736 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.updateBlockForPipeline from 172.28.14.61:41497 Call#164039 Retry#12
org.apache.hadoop.ipc.RetriableException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot get a new generation stamp and an access token for block BP-1552766309-172.28.41.193-1503397713205:blk_1073742745_1926. Name node is in safe mode.
The reported blocks 189 has reached the threshold 0.9990 of total blocks 189. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 4 seconds.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1331)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6234)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6309)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:806)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot get a new generation stamp and an access token for block BP-1552766309-172.28.41.193-1503397713205:blk_1073742745_1926. Name node is in safe mode.
The reported blocks 189 has reached the threshold 0.9990 of total blocks 189. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 4 seconds.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
	... 13 more
2017-09-05 10:14:27,976 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 55 secs
2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF
2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 2 datanodes
2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 190
2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 3
2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 1
2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 29 msec
2017-09-05 10:14:59,141 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command listStatus is: 0
2017-09-05 10:14:59,141 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command * is: 0
2017-09-05 10:14:59,143 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command listStatus is: 1
2017-09-05 10:14:59,145 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command * is: 1
2017-09-05 10:14:59,185 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command listStatus is: 1
2017-09-05 10:14:59,186 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command * is: 1
2017-09-05 10:16:50,848 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.28.41.196
2017-09-05 10:16:50,849 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-05 10:16:50,849 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 15839
2017-09-05 10:16:50,849 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 88 18
2017-09-05 10:16:50,883 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 120 20
2017-09-05 10:16:50,910 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop_name/current/edits_inprogress_0000000000000015839 -> /home/hadoop/hadoop_name/current/edits_0000000000000015839-0000000000000015841
2017-09-05 10:16:50,915 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 15842
2017-09-05 10:18:51,193 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.28.41.196
2017-09-05 10:18:51,193 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-05 10:18:51,193 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 15842
2017-09-05 10:18:51,194 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 19 8
2017-09-05 10:18:51,372 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 129 76
2017-09-05 10:18:51,405 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop_name/current/edits_inprogress_0000000000000015842 -> /home/hadoop/hadoop_name/current/edits_0000000000000015842-0000000000000015843
2017-09-05 10:18:51,406 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 15844
2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.28.41.196
2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 15844
2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 39 341
2017-09-05 10:20:52,258 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 103 413
2017-09-05 10:20:52,284 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop_name/current/edits_inprogress_0000000000000015844 -> /home/hadoop/hadoop_name/current/edits_0000000000000015844-0000000000000015845
2017-09-05 10:20:52,284 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 15846
```

Does an error like this mean something is wrong?
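A note on what the log is actually reporting: the NameNode stays in safe mode until reported blocks / total blocks reaches `dfs.namenode.safemode.threshold-pct` (0.999 by default), which is the "reported blocks 189 has reached the threshold 0.9990 of total blocks 189" line; the `RetriableException` during the extension window is the normal way clients are told to retry, and the excerpt then shows "Safe mode is OFF" followed by routine edit-log rolls. Nothing in this excerpt shows the NameNode exiting, so the actual fatal error, if the process really dies later, would be further down the NameNode log. A sketch for checking (and, if it ever sticks, manually leaving) safe mode from code, assuming this cluster's configuration files are on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class SafeModeCheck {
    public static void main(String[] args) throws Exception {
        // Assumes core-site.xml / hdfs-site.xml for this cluster are on the classpath,
        // so fs.defaultFS points at the NameNode rather than the local filesystem
        Configuration conf = new Configuration();
        try (DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf)) {
            // SAFEMODE_GET only queries the state; it does not change it
            boolean on = dfs.setSafeMode(SafeModeAction.SAFEMODE_GET);
            System.out.println("NameNode in safe mode: " + on);
            // Equivalent of `hdfs dfsadmin -safemode leave`, should it be needed:
            // dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
        }
    }
}
```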