DataNode shuts down without reporting any error

The DataNode starts up fine, but after a while it simply drops out of the cluster, with no exception information whatsoever. Why would that happen?
Below is the tail of its log from just before it went down; no exception is thrown anywhere.

2019-11-02 16:13:13,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 to 172.31.19.252:50010 
2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 (numBytes=109043) to /172.31.19.252:50010
2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742042_1218 (numBytes=197986) to /172.31.19.252:50010
2019-11-02 16:13:16,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010
2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010
2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 of size 58160
2019-11-02 16:13:16,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 of size 2178774
2019-11-02 16:13:16,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 to 172.31.19.252:50010 
2019-11-02 16:13:16,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 to 172.31.19.252:50010 
2019-11-02 16:13:17,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 (numBytes=34604) to /172.31.19.252:50010
2019-11-02 16:13:17,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 (numBytes=780664) to /172.31.19.252:50010
2019-11-02 16:13:19,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 to 172.31.19.252:50010 
2019-11-02 16:13:19,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 to 172.31.19.252:50010 
2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 (numBytes=6052) to /172.31.19.252:50010
2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 (numBytes=592319) to /172.31.19.252:50010
2019-11-02 16:13:44,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 src: /172.31.20.57:51732 dest: /172.31.23.3:50010
2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51732, dest: /172.31.23.3:50010, bytes: 1108073, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, duration(ns): 9331035
2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:44,223 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 src: /172.31.20.57:51736 dest: /172.31.23.3:50010
2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51736, dest: /172.31.23.3:50010, bytes: 20744, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, duration(ns): 822959
2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:44,240 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 src: /172.31.20.57:51738 dest: /172.31.23.3:50010
2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51738, dest: /172.31.23.3:50010, bytes: 53464, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, duration(ns): 834208
2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:44,250 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 src: /172.31.20.57:51740 dest: /172.31.23.3:50010
2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51740, dest: /172.31.23.3:50010, bytes: 60686, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, duration(ns): 836219
2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:45,139 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 src: /172.31.20.57:51748 dest: /172.31.23.3:50010
2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51748, dest: /172.31.23.3:50010, bytes: 914311, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, duration(ns): 7451340
2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:45,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 src: /172.31.20.57:51752 dest: /172.31.23.3:50010
2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51752, dest: /172.31.23.3:50010, bytes: 706710, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, duration(ns): 2666689
2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:45,192 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 src: /172.31.20.57:51754 dest: /172.31.23.3:50010
2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51754, dest: /172.31.23.3:50010, bytes: 186260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, duration(ns): 1335836
2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:45,617 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 src: /172.31.20.57:51756 dest: /172.31.23.3:50010
2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51756, dest: /172.31.23.3:50010, bytes: 1768012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, duration(ns): 8602898
2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:46,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010
2019-11-02 16:13:46,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 of size 205389
2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 to 172.31.19.252:50010 
2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 to 172.31.19.252:50010 
2019-11-02 16:13:47,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 (numBytes=20744) to /172.31.19.252:50010
2019-11-02 16:13:47,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 (numBytes=1108073) to /172.31.19.252:50010
2019-11-02 16:13:47,315 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249 src: /172.31.20.57:51766 dest: /172.31.23.3:50010
2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51766, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, duration(ns): 3408777
2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:47,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 src: /172.31.20.57:51768 dest: /172.31.23.3:50010
2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51768, dest: /172.31.23.3:50010, bytes: 36519, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, duration(ns): 1284246
2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:47,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 src: /172.31.20.57:51776 dest: /172.31.23.3:50010
2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51776, dest: /172.31.23.3:50010, bytes: 279012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, duration(ns): 2573122
2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:47,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 src: /172.31.20.57:51778 dest: /172.31.23.3:50010
2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51778, dest: /172.31.23.3:50010, bytes: 1344870, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, duration(ns): 3770082
2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:48,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 src: /172.31.20.57:51780 dest: /172.31.23.3:50010
2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51780, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, duration(ns): 2365213
2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:48,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257 src: /172.31.20.57:51782 dest: /172.31.23.3:50010
2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51782, dest: /172.31.23.3:50010, bytes: 99555, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, duration(ns): 1140563
2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:49,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259 src: /172.31.20.57:51786 dest: /172.31.23.3:50010
2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51786, dest: /172.31.23.3:50010, bytes: 20998, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, duration(ns): 823110
2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:49,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262 src: /172.31.20.57:51792 dest: /172.31.23.3:50010
2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51792, dest: /172.31.23.3:50010, bytes: 224277, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, duration(ns): 1129868
2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:49,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263 src: /172.31.20.57:51794 dest: /172.31.23.3:50010
2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51794, dest: /172.31.23.3:50010, bytes: 780664, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, duration(ns): 2377601
2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:49,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010
2019-11-02 16:13:49,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010
2019-11-02 16:13:49,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 of size 232248
2019-11-02 16:13:49,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 of size 434678
2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 to 172.31.19.252:50010 
2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 to 172.31.19.252:50010 
2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742073_1249 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 for deletion
2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 (numBytes=53464) to /172.31.19.252:50010
2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742073_1249 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073
2019-11-02 16:13:50,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 (numBytes=60686) to /172.31.19.252:50010
2019-11-02 16:13:51,310 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269 src: /172.31.19.252:46180 dest: /172.31.23.3:50010
2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.19.252:46180, dest: /172.31.23.3:50010, bytes: 94, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, duration(ns): 2826729
2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, type=LAST_IN_PIPELINE terminating
2019-11-02 16:13:52,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010
2019-11-02 16:13:52,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010
2019-11-02 16:13:52,986 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 of size 1033299
2019-11-02 16:13:52,987 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 of size 892808
2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 to 172.31.19.252:50010 
2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 to 172.31.19.252:50010 
2019-11-02 16:13:53,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 (numBytes=914311) to /172.31.19.252:50010
2019-11-02 16:13:53,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 (numBytes=706710) to /172.31.19.252:50010
2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 to 172.31.19.252:50010 
2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 to 172.31.19.252:50010 
2019-11-02 16:13:56,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 (numBytes=186260) to /172.31.19.252:50010
2019-11-02 16:13:56,025 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010
2019-11-02 16:13:56,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010
2019-11-02 16:13:56,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 of size 36455
2019-11-02 16:13:56,040 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 of size 1801469
2019-11-02 16:13:56,068 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 (numBytes=1768012) to /172.31.19.252:50010
2019-11-02 16:13:58,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010
2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010
2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 of size 19827
2019-11-02 16:13:58,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 to 172.31.19.252:50010 
2019-11-02 16:13:59,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 (numBytes=36519) to /172.31.19.252:50010
2019-11-02 16:13:59,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 of size 267634
2019-11-02 16:14:01,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273 src: /172.31.23.3:50512 dest: /172.31.23.3:50010
2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.23.3:50512, dest: /172.31.23.3:50010, bytes: 1029, op: HDFS_WRITE, cliID: DFSClient_attempt_1572710114754_0009_m_000000_0_-2142389405_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, duration(ns): 3798130
2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, type=LAST_IN_PIPELINE terminating
2019-11-02 16:14:01,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010
2019-11-02 16:14:01,994 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 of size 375618
2019-11-02 16:14:01,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010
2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 to 172.31.19.252:50010 
2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 of size 1765905
2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 to 172.31.19.252:50010 
2019-11-02 16:14:02,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 (numBytes=279012) to /172.31.19.252:50010
2019-11-02 16:14:02,008 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 (numBytes=1344870) to /172.31.19.252:50010
2019-11-02 16:14:04,999 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 because on-disk length 990927 is shorter than NameNode recorded length 9223372036854775807
2019-11-02 16:14:08,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 to 172.31.19.252:50010 
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 to 172.31.19.252:50010 
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079 for deletion
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 (numBytes=375618) to /172.31.19.252:50010
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 (numBytes=36455) to /172.31.19.252:50010
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079
2019-11-02 16:14:11,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 to 172.31.19.252:50010 
2019-11-02 16:14:11,007 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 (numBytes=25496) to /172.31.19.252:50010
2019-11-02 17:01:35,904 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-793432708-172.31.20.57-1572709584342 Total blocks: 88, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x15733bb21ccd9a44,  containing 1 storage report(s), of which we sent 1. The reports had 88 total blocks and used 1 RPC(s). This took 1 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-793432708-172.31.20.57-1572709584342
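
For reference: a DataNode that vanishes without writing any exception to its `.log` file has usually been terminated from *outside* the JVM. A process killed with SIGKILL (by the kernel OOM killer, or a `kill -9`) gets no chance to log anything, which matches a log that simply stops mid-stream. A few hedged host-side checks, assuming a typical Linux install with default Hadoop log locations (paths are illustrative, not taken from the question):

```shell
# Hypothetical diagnostics for a silently-dying DataNode; adjust paths to your install.

# 1. Kernel OOM killer: a SIGKILL leaves no trace in the DataNode's own log,
#    but the kernel records the kill in its ring buffer / syslog.
(dmesg 2>/dev/null; cat /var/log/messages 2>/dev/null) \
  | grep -iE 'killed process|out of memory' || true

# 2. JVM-level failures go to the daemon's .out file (stdout/stderr), not the .log.
tail -n 50 "${HADOOP_HOME:-/opt/hadoop}"/logs/hadoop-*-datanode-*.out 2>/dev/null || true

# 3. A hard JVM crash writes an hs_err_pid<N>.log dump in the working dir or /tmp.
ls -lt /tmp/hs_err_pid*.log 2>/dev/null || true
```

If none of these turn up anything, the shape of the log itself is a clue: a clean `hadoop-daemon.sh stop datanode` ends with a `SHUTDOWN_MSG` banner, whereas a SIGKILL leaves the log truncated mid-stream with no banner, which is exactly how the excerpt above ends.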

其他相关推荐
hadoop datanode日志报错
2016-04-10 11:35:08,998 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService java.io.EOFException: End of File Exception between local host is: "master/10.13.6.186"; destination host is: "master":9000; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
hadoop datanode节点 不能启动
每次启动前都要删除current/VERSION 否则datanode节点不能启动,报错: ![图片说明](https://img-ask.csdn.net/upload/201801/08/1515411908_628664.png)
分布式无法在其他两机器启动datanode
各位大神: 我最近自学大数据Hadoop,在虚拟机上装了三个机器,进行分布式链接,链接过后start-all.sh,第一台机器启动了Datanode,另外两台机器没有启动,我上网搜寻了相关错误类型,并没有解决,clusterID我已经都调试过,并没有用。想问各位大佬怎么办,附一张第三台机器的datanode启动失败日志。 ![图片说明](https://img-ask.csdn.net/upload/202001/27/1580054815_552009.png)
hadoop的DataNode节点的问题
Centos7为什么每次启动hadoop时,jps查看DataNode都没有,要删除目录在重建,并重新格式化才有DataNode。文件和hadoop安装包删了重建了并格式化了还是这样。环境,配置什么的都OK的
hbase 的datanode老师挂掉,求解决
hbase启动之后datanode就挂掉,而且我的hbase启动之后无法新创建pid文件,因此无法关闭,启动完hbase之后 ,使用hbase shell之后用status 会报错![图片说明](https://img-ask.csdn.net/upload/201712/07/1512636854_641235.png)
cdh5.2 namenode格式化后,报datanode版本不一致,改好版本
cdh5.2 namenode格式化后,报datanode版本不一致,改好版本,namenodf和datanode都启动起来了,hbase master也启动起来了,期间删除zookeeper-client中的hbase,想问为什么hbase中list没有数据呀,该怎么解决。
找不到DataNode,DataNode已启动,请各位大神帮忙
在master上用start-all.sh启动,在slave中可以看见DataNode以及sorcemanager,但是在web端无法查看到DataNode节点,并且在文件-put时出错,也是提示数据节点为0。请各位大神帮忙
有无大神帮忙看hadoop无法启动DataNode
************************************************************/ 2019-04-04 09:44:42,114 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT] 2019-04-04 09:44:46,654 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/opt/hdfs/data 2019-04-04 09:44:47,320 WARN org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/opt/hdfs/data java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat; at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method) at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:451) at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:796) at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:710) at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678) at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233) at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141) at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116) at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239) at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52) at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) 2019-04-04 09:44:47,379 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0 at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2776) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2691) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2733) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2877) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2901) 2019-04-04 09:44:47,499 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0 2019-04-04 09:44:47,659 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down DataNode at master/192.168.236.128
eclipse连接hadoop报错Could not obtain block
1.hadoop2.7.7版本 2报错截图![图片说明](https://img-ask.csdn.net/upload/201901/23/1548209349_373821.png) 3.没有出现挂datanode的问题[图片] ![图片说明](https://img-ask.csdn.net/upload/201901/23/1548209377_858284.png) 4.可以从web端正常访问和下载
hadoop2.6.5集群master启动时只能启动自身作为datanode,slave节点无法控制且没有日志?
1个master,2个slave,多次格式化删除/usr/local/hadoop/logs与/tmp无效 报错日志显示: 2019-05-22 15:43:46,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: Master/219.226.109.130:9000 但是所有节点防火墙均为关闭状态 /etc/hosts中配置均为: #127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 #::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 219.226.109.130 Master 219.226.109.129 Slave1 219.226.109.131 Slave2
Docker容器中的centos7上安装ambari:启动datanode时Permission denied
我在Docker容器中的centos7上安装ambari,然后部署hadoop集群,在启动DataNode的时候遇到了这样的报错信息 “Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. su: cannot open session: Permission denied” 是切换到hdfs用户(su hdfs)的时候,被Permission denied,请问大家有办法可以解决吗?非常感谢!!! ![图片说明](https://img-ask.csdn.net/upload/201904/29/1556540883_509077.jpg)
hadoop通过虚拟机部署为分布式,datanode连接不上namenode
使用hdfs namenode -format 进行namenode节点格式化,然后把配置好的hadoop发到其他两个虚拟机。 core-site.xml的配置:(fs.defaultFS配置的value是namenode节点的地址,三台都是如此) ``` <property> <name>fs.defaultFS</name> <value>hdfs://192.168.216.201:9000</value> </property> <property> <name>hadoop.tmp.dir</name> <value>/home/mym/hadoop/hadoop-2.4.1/tmp</value> </property> ``` 启动namenode然后启动datanode,通过后台没有找到datanode,查看datanode的日志如下: 2018-01-29 15:06:02,528 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to mym/192.168.216.201:9000 starting to offer service 2018-01-29 15:06:02,548 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting 2018-01-29 15:06:02,574 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting 2018-01-29 15:06:02,726 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:07,730 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:12,738 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:17,741 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:22,751 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:27,754 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:32,760 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:37,765 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:42,780 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:47,788 WARN 
org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:52,793 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:06:57,799 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 2018-01-29 15:07:02,804 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: mym/192.168.216.201:9000 查看网络: namenode机器上: ``` [mym@mym hadoop]$ netstat -an | grep 9000 tcp 0 0 192.168.216.201:9000 0.0.0.0:* LISTEN tcp 0 0 192.168.216.201:9000 192.168.216.202:46604 ESTABLISHED ``` 其中一台datanode机器: ``` [mini2@mini2 sbin]$ netstat -an | grep 9000 tcp 0 0 192.168.216.202:46604 192.168.216.201:9000 ESTABLISHED ``` ----------------------------------- 已尝试的解决方法: 1.配置三台机器的hosts文件,且删除了回环地址。此时重新格式化namenode再进行测试。结果:没有解决 2.防火墙以及firewall都关了,用telnet 192.168.216.201 9000 也可以连接。仍然无效。 注: 1.datanode启动后没有生成current文件。namenode生成了current文件 2.使用的版本是2.4.1 3.使用jps分别查看namenode和datanode都可以看到启动了(估计datanode是启动失败的) 4.在namenode机器上启动datanode。后台可以查看到datanode 5.三台机器都配置了域名,且都能互相ping通 请求帮助
为何master节点会出现在datanode中????
三个节点启动好几个小时了,突然发现一个slave2挂了,但我在slave2上,用JPS看该有的都有。去网上看到说可以用hadoop-daemon.sh start datanode启动datanode。再用master:50070看,发现datanode里面出现了master。。。 什么情况???求解答
hadoop datanode链接namenode问题
datanode——slave1连接不上namenode 2016-04-10 20:45:04,761 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.10:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) master和slave1的hosts文件都是如下: 127.0.0.1 localhost 192.168.1.10 master 192.168.1.11 slave1
Hadoop 2.7.3完全分布模式下datanode启动不起来,求支招..
core-site.xml配置: <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://s0:9000</value> </property> <property> <name>io.file.buffer.size</name> <value>131072</value> </property> <property> <name>hadoop.tmp.dir</name> <value>/home/moclick/hadoop/tmp</value> </property> </configuration> </configuration> hdfs-site.xml配置: <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>dfs.namenode.secondary.http-address</name> <value>s3:50090</value> </property> <property> <name>dfs.replication</name> <value>2</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>file:/home/moclick/hadoop/hdfs/name</value> </property> <property> <name>dfs.datanode.data.dir</name> <value>file:/home/moclick/hadoop/hdfs/data</value> </property> </configuration> datanode节点异常信息: 2017-10-22 11:32:59,209 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured. 
at org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:875) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:155) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1129) at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509) 2017-10-22 11:32:59,216 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2017-10-22 11:32:59,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down DataNode at s1/192.168.1.111 ************************************************************/ 百度过方法了,实在找不到解决办法...望大佬帮忙看看~
hadoop2.x集群部署一种一个datanode无法启动
Exception in secureMain java.net.UnknownHostException: node1: node1 at java.net.InetAddress.getLocalHost(InetAddress.java:1473) at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187) at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2153) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402) Caused by: java.net.UnknownHostException: node1 at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293) at java.net.InetAddress.getLocalHost(InetAddress.java:1469) ... 6 more 2015-01-16 09:08:54,152 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2015-01-16 09:08:54,164 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: node1: node1 ************************************************************/ 环境ubuntu,hadoop2.6,jdk7 [排比句](http://www.zaojuzi.com/paibiju/ "")部署三台虚拟机一台namenode,两台datanode;/etc/hostname 都已经配置分布为master,node1,node2 /etc/hosts配置为: 27.0.0.1 localhost 127.0.1.1 ubuntu.localdomain ubuntu # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 192.168.184.129 master 192.168.184.130 node1 192.168.184.131 node2 hadoop/etc/hadoo/slaves配置为[造句](http://www.zaojuzi.com/ ""): node1 node2 core-site.xml配置为: <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://master:9000/</value> </property> <property> 
<name>hadoop.tmp.dir</name> <value>/home/yangwq/hadoop-2.6.0/temp</value> <description>A base for other temporary directories.</description> </property> </configuration> hdfs-site.xml配置为: <configuration> <property> <name>dfs.replication</name> <value>2</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>file:/home/yangwq/hadoop-2.6.0/dfs/name</value> <final>true</final> </property> <property> <name>dfs.datanode.data.dir</name> <value>file:/home/yangwq/hadoop-2.6.0/dfs/data</value> </property> </configuration> mapred-site.xml配置为: <configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> <final>true</final> </property> </configuration> yarn-site.xml配置为: <configuration> <!-- Site specific YARN configuration properties --> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> <value>org.apache.hadoop.mapred.ShuffleHandler</value> </property> <!-- resourcemanager hostname或ip地址--> <property> <name>yarn.resourcemanager.hostname</name> <value>master</value> </property> </configuration> 在启动的时候node1节点的datanode一直无法启动,同时通过ssh登录各节点都是正常。
Hadoop cluster: DataNode reports a successful start, but jps shows no DataNode process
This is the jps output on master:

![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635422_43286.png)

This is the jps output on the slave:

![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635450_918763.png)

This is the 50070 web page:

![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635471_368841.png)

The output of `hadoop dfsadmin -report`:

![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635512_251415.png)

The VERSION info on master and on the slave:

![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635571_921496.png)
![screenshot](https://img-ask.csdn.net/upload/201804/02/1522635586_259196.png)

Does anyone know what the problem is?
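One frequent cause of this symptom (startup looks fine, but the DataNode process dies and disappears from jps) is a clusterID mismatch between the namenode's and datanode's `VERSION` files, typically after the namenode has been reformatted. A minimal sketch of the comparison, assuming the standard layout; the `/tmp/...` paths and CID values below are fabricated stand-ins for the real `dfs.namenode.name.dir/current/VERSION` and `dfs.datanode.data.dir/current/VERSION` files:

```shell
#!/bin/sh
# Fabricated VERSION files standing in for the real ones under
# dfs.namenode.name.dir/current and dfs.datanode.data.dir/current.
mkdir -p /tmp/nn/current /tmp/dn/current
printf 'clusterID=CID-aaaa-1111\n' > /tmp/nn/current/VERSION
printf 'clusterID=CID-bbbb-2222\n' > /tmp/dn/current/VERSION

# Extract the clusterID value from each VERSION file and compare.
nn_cid=$(grep '^clusterID=' /tmp/nn/current/VERSION | cut -d= -f2)
dn_cid=$(grep '^clusterID=' /tmp/dn/current/VERSION | cut -d= -f2)

if [ "$nn_cid" = "$dn_cid" ]; then
    echo "clusterIDs match: $nn_cid"
else
    echo "clusterID MISMATCH: namenode=$nn_cid datanode=$dn_cid"
fi
```

If the IDs differ on a real cluster, the usual remedy is to make the datanode's clusterID match the namenode's (or wipe the datanode data directory and let it re-register); the VERSION screenshots above are the place to check.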
No DataNode information shows up on the web UI after starting the Hadoop cluster
![screenshot](https://img-ask.csdn.net/upload/201805/15/1526360930_126049.png)

node82 serves as the namenode, and node81, node80, and node79 are datanodes. jps shows all of them running, and the web UI can be reached, but it displays no datanode information.

![screenshot](https://img-ask.csdn.net/upload/201805/15/1526361082_15795.png)
![screenshot](https://img-ask.csdn.net/upload/201805/15/1526361093_714827.png)
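When jps shows DataNode processes but the web UI lists none, the datanodes have started locally yet never registered with the namenode; firewalls, `/etc/hosts` entries, and clusterID mismatches are common culprits. The live-node count can be pulled out of `hdfs dfsadmin -report` output; the report text below is a fabricated sample rather than output from a real cluster:

```shell
#!/bin/sh
# Fabricated sample of the summary section printed by `hdfs dfsadmin -report`
# on Hadoop 2.x; on a real cluster, substitute the command's actual output.
report='Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
Live datanodes (0):'

# Pull the number out of the "Live datanodes (N):" line.
live=$(printf '%s\n' "$report" | sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p')
echo "live datanodes: $live"

if [ "$live" = "0" ]; then
    echo "no datanodes registered: check firewalls, /etc/hosts, and clusterIDs"
fi
```

A count of 0 here, with DataNode processes visibly running, points at a registration problem rather than a startup failure.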