Mounting HDFS via NFS fails: mount.nfs3 mount system call failed

As a total newbie trying to map HDFS into the local Linux filesystem, I followed the Apache docs step by step (https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html) and hit no problems along the way.
Both nfs3 and portmap are running, and the firewall is turned off.
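For reference, the daemons were started along the lines of the 2.4.1 guide (a sketch rather than my exact shell history; the guide has the system NFS services stopped first so the gateway can own the ports):

```
# Stop the system NFS services so ports 111/2049 are free for the gateway,
# which ships its own portmap and nfs3 implementations.
service nfs stop
service rpcbind stop

# Start the Hadoop portmap (as root), then the NFS3 gateway, as the
# HdfsNfsGateway guide for Hadoop 2.4.x describes.
hadoop-daemon.sh start portmap
hadoop-daemon.sh start nfs3
```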

```
[root@Master hadoop]# rpcinfo -p Master.dev
   program vers proto   port  service
    100005    3   udp   4242  mountd
    100005    1   tcp   4242  mountd
    100000    2   udp    111  portmapper
    100000    2   tcp    111  portmapper
    100005    3   tcp   4242  mountd
    100005    2   tcp   4242  mountd
    100003    3   tcp   2049  nfs
    100005    2   udp   4242  mountd
    100005    1   udp   4242  mountd
[root@Master hadoop]# showmount -e Master.dev
Export list for Master.dev:
/ *
```

At the very last step, the mount fails with "mount.nfs3 mount system call failed":

```
mount -t nfs -o nfsvers=3,vers=3,proto=tcp,nolock Master.dev:/ /data/hdfs/
```
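(In case it helps anyone reproduce this: a verbose mount plus the kernel log usually shows which step of the mount system call fails. A sketch using the same host and paths as above:)

```
# Retry with verbose output to see where the mount system call fails
mount -v -t nfs -o nfsvers=3,proto=tcp,nolock Master.dev:/ /data/hdfs/

# The kernel typically logs the underlying NFS error as well
dmesg | tail -n 20
```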

I've been puzzling over this for two days and still can't solve it. Any pointers would be appreciated. T.T

1 Answer

Which version of NFS do you have installed? You passed nfsvers=3 explicitly, so make sure that version matches what your client actually supports. Alternatively, follow the official docs and use:

```
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $server:/  $mount_point
```
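Before retrying, it is worth confirming what both sides actually support. A minimal sketch, assuming a RHEL/CentOS-style client (which the [root@Master ...] prompt suggests):

```
# Client side: which nfs-utils (and hence mount.nfs) is installed
rpm -q nfs-utils

# Server side: the HDFS gateway only speaks NFSv3, so program 100003
# should list exactly vers 3, as the rpcinfo output above shows
rpcinfo -p Master.dev | grep -w nfs
```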
Other related threads
After upgrading Hadoop to 2.2.0, jobs fail with Shell$ExitCodeException: id: dr.who: No such user
2013-12-03 11:34:56,590 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user dr.who 2013-12-03 11:34:56,589 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user dr.who org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user at org.apache.hadoop.util.Shell.runCommand(Shell.java:504) at org.apache.hadoop.util.Shell.run(Shell.java:417) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636) at org.apache.hadoop.util.Shell.execCommand(Shell.java:725) at org.apache.hadoop.util.Shell.execCommand(Shell.java:708) at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83) at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52) at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50) at org.apache.hadoop.security.Groups.getGroups(Groups.java:95) at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:63) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) 
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384) at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1310) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
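A workaround often suggested for this trace (an assumption here, not a confirmed fix for this cluster) is to give dr.who, the default static WebHDFS user, a real account so the shell-based group mapping can resolve it:

```
# Hypothetical workaround: create the default WebHDFS user so
# "id dr.who" succeeds instead of returning "No such user".
groupadd dr.who
useradd -g dr.who dr.who
id dr.who
```

Alternatively, hadoop.http.staticuser.user in core-site.xml can be pointed at an existing account.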
File not found: File does not exist: reduce.xml
Importing data from Hive into Elasticsearch fails (the command used is INSERT OVERWRITE TABLE doc SELECT s.id,s.name FROM user_f s;). The error output is as follows: 16/03/24 13:22:54 [main]: INFO exec.Utilities: File not found: File does not exist: /tmp/hive/hadoop/cf07a2cb-f401-440b-b230-3adb69d7ce9a/hive_2016-03-24_13-22-52_349_3866738858790764474-1/-mr-10001/4166b8bf-0706-4bda-9912-3cab4e82bcde/reduce.xml at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71) at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:587) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) 16/03/24 13:22:54 [main]: INFO exec.Utilities: No plan file found: hdfs://ubuntu:9000/tmp/hive/hadoop/cf07a2cb-f401-440b-b230-3adb69d7ce9a/hive_2016-03-24_13-22-52_349_3866738858790764474-1/-mr-10001/4166b8bf-0706-4bda-9912-3cab4e82bcde/reduce.xml
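For what it's worth, the reduce.xml lines above are INFO-level noise; with elasticsearch-hadoop the usual prerequisite is that the target table is an external table backed by EsStorageHandler. A hedged sketch (the jar path, index name, and ES address are assumptions, not taken from the thread):

```
# Hypothetical elasticsearch-hadoop setup for the target Hive table
hive -e "
ADD JAR /path/to/elasticsearch-hadoop.jar;
CREATE EXTERNAL TABLE doc (id BIGINT, name STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES ('es.resource' = 'doc/doc', 'es.nodes' = 'localhost:9200');
INSERT OVERWRITE TABLE doc SELECT s.id, s.name FROM user_f s;
"
```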
Hadoop rack awareness: cannot load the configured class
I have configured net.topology.node.switch.mapping.impl in core-site.xml and put the jar under /opt/modules/hadoop-2.7.2/share/hadoop/common/lib. The following error appears on startup. Which directory does Hadoop load jars from at startup? java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.learning.rackawareness.RackAwareness not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2227) at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:208) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:268) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:737) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:246) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192) at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671) Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.learning.rackawareness.RackAwareness not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195) at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2219) ... 6 more Caused by: java.lang.ClassNotFoundException: Class com.learning.rackawareness.RackAwareness not found at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193) ... 7 more 2019-10-07 15:23:41,958 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state 2019-10-07 15:23:41,970 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: SHUTDOWN_MSG:
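One way to confirm the jar is really visible to the daemons (a sketch; the jar name is an assumption):

```
# Copy the jar where the question put it, confirm it is there,
# then restart: daemons only pick up new jars on restart.
cp rack-awareness.jar /opt/modules/hadoop-2.7.2/share/hadoop/common/lib/
ls /opt/modules/hadoop-2.7.2/share/hadoop/common/lib/ | grep -i rack
stop-dfs.sh && start-dfs.sh
```

Note the trace comes from the SecondaryNameNode, so the jar has to be present on every node that runs an HDFS daemon, not just the one where it was configured.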
Hadoop configured with ZooKeeper: NameNode log shows an exception at startup
Hadoop is set up with ZooKeeper; everything starts normally, the logs show no errors, and uploading files works, but the NameNode logs one exception: 2015-12-31 22:49:58,753 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.254.12:8485, 192.168.254.13:8485, 192.168.254.14:8485]. Skipping. org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown: 192.168.254.12:8485: Call From host5/192.168.254.15 to host2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 192.168.254.14:8485: Call From host5/192.168.254.15 to host4:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 192.168.254.13:8485: Call From host5/192.168.254.15 to host3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223) at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142) at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:460) at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:252) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1237) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1265) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1249) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:209) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:321) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296) at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292) 2015-12-31 22:49:58,900 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state 2015-12-31 22:49:58,900 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:334) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296) at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
Hadoop cluster set up, all other processes start, but the NameNode does not; its log shows errors
The Hadoop cluster is set up and all other processes start, but the NameNode does not come up; its log reports: 192.168.100.70:8485: Call From anlulu-1/192.168.100.10 to anlulu-7:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 192.168.100.50:8485: Call From anlulu-1/192.168.100.10 to anlulu-5:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 192.168.100.60:8485: Call From anlulu-1/192.168.100.10 to anlulu-6:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223) at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142) at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182) at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436) at org.apache.hadoop.hdfs.server.namenode.JournalSet$7.apply(JournalSet.java:590) at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:359) at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:587) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1330) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:994) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1521) at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61) at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49) at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1399) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1160) at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107) at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) 2015-08-20 00:41:50,962 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2015-08-20 00:41:50,966 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
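Both this thread and the previous one show the NameNode getting "Connection refused" from every JournalNode on port 8485, which usually just means the JournalNodes are not running yet. A sketch of the usual check-and-start order (hostnames as in the logs):

```
# On each JournalNode host (anlulu-5/6/7 here), start the daemon
# before starting or failing over the NameNode.
hadoop-daemon.sh start journalnode

# Confirm something is listening on the QJM port
netstat -tlnp | grep 8485
```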
DataNode shuts down without reporting any error
It starts up fine, then just drops off after a while, with no exception information at all. Why would that be? Here are some of the trailing logs; nothing is thrown at all.
```
2019-11-02 16:13:13,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 to 172.31.19.252:50010 2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742045_1221 (numBytes=109043) to /172.31.19.252:50010 2019-11-02 16:13:14,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742042_1218 (numBytes=197986) to /172.31.19.252:50010 2019-11-02 16:13:16,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 2019-11-02 16:13:16,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742051_1227 src: /172.31.19.252:46170 dest: /172.31.23.3:50010 of size 58160 2019-11-02 16:13:16,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742050_1226 src: /172.31.19.252:46171 dest: /172.31.23.3:50010 of size 2178774 2019-11-02 16:13:16,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 to 172.31.19.252:50010 2019-11-02 16:13:16,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 to 172.31.19.252:50010 2019-11-02 16:13:17,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742048_1224 (numBytes=34604) to /172.31.19.252:50010 2019-11-02 16:13:17,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742047_1223 (numBytes=780664) to /172.31.19.252:50010 2019-11-02 16:13:19,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 to 172.31.19.252:50010 2019-11-02 16:13:19,999 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 to 172.31.19.252:50010 2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742052_1228 (numBytes=6052) to /172.31.19.252:50010 2019-11-02 16:13:20,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742049_1225 (numBytes=592319) to /172.31.19.252:50010 2019-11-02 16:13:44,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 src: /172.31.20.57:51732 dest: /172.31.23.3:50010 2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51732, dest: /172.31.23.3:50010, bytes: 1108073, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, duration(ns): 9331035 2019-11-02 16:13:44,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,223 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 src: /172.31.20.57:51736 dest: /172.31.23.3:50010 2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51736, dest: /172.31.23.3:50010, bytes: 20744, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, duration(ns): 822959 2019-11-02 16:13:44,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,240 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 src: /172.31.20.57:51738 dest: /172.31.23.3:50010 2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51738, dest: /172.31.23.3:50010, bytes: 53464, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, duration(ns): 834208 2019-11-02 16:13:44,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:44,250 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 src: /172.31.20.57:51740 dest: /172.31.23.3:50010 2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51740, dest: /172.31.23.3:50010, bytes: 60686, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 
77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, duration(ns): 836219 2019-11-02 16:13:44,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,139 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 src: /172.31.20.57:51748 dest: /172.31.23.3:50010 2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51748, dest: /172.31.23.3:50010, bytes: 914311, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, duration(ns): 7451340 2019-11-02 16:13:45,147 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 src: /172.31.20.57:51752 dest: /172.31.23.3:50010 2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51752, dest: /172.31.23.3:50010, bytes: 706710, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, duration(ns): 2666689 2019-11-02 16:13:45,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,192 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 src: /172.31.20.57:51754 dest: /172.31.23.3:50010 2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51754, dest: /172.31.23.3:50010, bytes: 186260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, duration(ns): 1335836 2019-11-02 16:13:45,194 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:45,617 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 src: /172.31.20.57:51756 dest: /172.31.23.3:50010 2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51756, dest: /172.31.23.3:50010, bytes: 1768012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, duration(ns): 8602898 2019-11-02 16:13:45,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:46,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 2019-11-02 16:13:46,981 
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742057_1233 src: /172.31.19.252:46174 dest: /172.31.23.3:50010 of size 205389 2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 to 172.31.19.252:50010 2019-11-02 16:13:46,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 to 172.31.19.252:50010 2019-11-02 16:13:47,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742058_1234 (numBytes=20744) to /172.31.19.252:50010 2019-11-02 16:13:47,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742056_1232 (numBytes=1108073) to /172.31.19.252:50010 2019-11-02 16:13:47,315 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249 src: /172.31.20.57:51766 dest: /172.31.23.3:50010 2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51766, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, duration(ns): 3408777 2019-11-02 16:13:47,320 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742073_1249, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 src: /172.31.20.57:51768 dest: /172.31.23.3:50010 2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51768, dest: /172.31.23.3:50010, bytes: 36519, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, duration(ns): 1284246 2019-11-02 16:13:47,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 src: /172.31.20.57:51776 dest: /172.31.23.3:50010 2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51776, dest: /172.31.23.3:50010, bytes: 279012, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, duration(ns): 2573122 
2019-11-02 16:13:47,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:47,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 src: /172.31.20.57:51778 dest: /172.31.23.3:50010 2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51778, dest: /172.31.23.3:50010, bytes: 1344870, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, duration(ns): 3770082 2019-11-02 16:13:47,812 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:48,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 src: /172.31.20.57:51780 dest: /172.31.23.3:50010 2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51780, dest: /172.31.23.3:50010, bytes: 990927, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, duration(ns): 2365213 2019-11-02 16:13:48,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:48,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257 src: /172.31.20.57:51782 dest: /172.31.23.3:50010 2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51782, dest: /172.31.23.3:50010, bytes: 99555, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, duration(ns): 1140563 2019-11-02 16:13:48,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742081_1257, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259 src: /172.31.20.57:51786 dest: /172.31.23.3:50010 2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51786, dest: /172.31.23.3:50010, bytes: 20998, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, duration(ns): 823110 2019-11-02 16:13:49,064 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742083_1259, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262 src: /172.31.20.57:51792 dest: /172.31.23.3:50010 2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51792, dest: /172.31.23.3:50010, bytes: 224277, op: 
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, duration(ns): 1129868 2019-11-02 16:13:49,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742086_1262, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263 src: /172.31.20.57:51794 dest: /172.31.23.3:50010 2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.20.57:51794, dest: /172.31.23.3:50010, bytes: 780664, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, duration(ns): 2377601 2019-11-02 16:13:49,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742087_1263, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:49,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 2019-11-02 16:13:49,981 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 2019-11-02 16:13:49,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742062_1238 src: /172.31.19.252:46177 dest: /172.31.23.3:50010 of size 232248 2019-11-02 16:13:49,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742061_1237 src: /172.31.19.252:46176 dest: /172.31.23.3:50010 of size 434678 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 to 172.31.19.252:50010 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 to 172.31.19.252:50010 2019-11-02 16:13:49,999 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742073_1249 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 for deletion 2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742059_1235 (numBytes=53464) to /172.31.19.252:50010 2019-11-02 16:13:50,000 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742073_1249 file 
/data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742073 2019-11-02 16:13:50,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742060_1236 (numBytes=60686) to /172.31.19.252:50010 2019-11-02 16:13:51,310 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269 src: /172.31.19.252:46180 dest: /172.31.23.3:50010 2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.31.19.252:46180, dest: /172.31.23.3:50010, bytes: 94, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_803773611_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, duration(ns): 2826729 2019-11-02 16:13:51,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742093_1269, type=LAST_IN_PIPELINE terminating 2019-11-02 16:13:52,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 2019-11-02 16:13:52,983 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 2019-11-02 16:13:52,986 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742063_1239 src: /172.31.19.252:46190 dest: /172.31.23.3:50010 of size 1033299 2019-11-02 16:13:52,987 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742065_1241 src: /172.31.19.252:46192 dest: /172.31.23.3:50010 of size 892808 2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 to 172.31.19.252:50010 2019-11-02 16:13:52,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 to 172.31.19.252:50010 2019-11-02 16:13:53,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742064_1240 (numBytes=914311) to /172.31.19.252:50010 2019-11-02 16:13:53,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742066_1242 (numBytes=706710) to /172.31.19.252:50010 2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer 
BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 to 172.31.19.252:50010 2019-11-02 16:13:55,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 to 172.31.19.252:50010 2019-11-02 16:13:56,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742067_1243 (numBytes=186260) to /172.31.19.252:50010 2019-11-02 16:13:56,025 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 2019-11-02 16:13:56,026 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 2019-11-02 16:13:56,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742070_1246 src: /172.31.19.252:46200 dest: /172.31.23.3:50010 of size 36455 2019-11-02 16:13:56,040 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742069_1245 src: /172.31.19.252:46198 dest: /172.31.23.3:50010 of size 1801469 2019-11-02 16:13:56,068 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742068_1244 (numBytes=1768012) to /172.31.19.252:50010 2019-11-02 16:13:58,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 2019-11-02 16:13:58,996 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742072_1248 src: /172.31.19.252:46208 dest: /172.31.23.3:50010 of size 19827 2019-11-02 16:13:58,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 to 172.31.19.252:50010 2019-11-02 16:13:59,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742074_1250 (numBytes=36519) to /172.31.19.252:50010 2019-11-02 16:13:59,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742071_1247 src: /172.31.19.252:46206 dest: /172.31.23.3:50010 of size 267634 2019-11-02 16:14:01,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273 src: /172.31.23.3:50512 dest: /172.31.23.3:50010 2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/172.31.23.3:50512, dest: /172.31.23.3:50010, bytes: 1029, op: HDFS_WRITE, cliID: DFSClient_attempt_1572710114754_0009_m_000000_0_-2142389405_1, offset: 0, srvID: 77d8a096-eb31-4724-8d4a-187b54f9fe8d, blockid: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, duration(ns): 3798130 2019-11-02 16:14:01,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-793432708-172.31.20.57-1572709584342:blk_1073742097_1273, type=LAST_IN_PIPELINE terminating 2019-11-02 16:14:01,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 2019-11-02 16:14:01,994 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742076_1252 src: /172.31.19.252:46218 dest: /172.31.23.3:50010 of size 375618 2019-11-02 16:14:01,995 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 to 172.31.19.252:50010 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received BP-793432708-172.31.20.57-1572709584342:blk_1073742075_1251 src: /172.31.19.252:46216 dest: /172.31.23.3:50010 of size 1765905 2019-11-02 16:14:01,999 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 to 172.31.19.252:50010 2019-11-02 16:14:02,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742078_1254 (numBytes=279012) to /172.31.19.252:50010 2019-11-02 16:14:02,008 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742079_1255 (numBytes=1344870) to /172.31.19.252:50010 2019-11-02 16:14:04,999 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block BP-793432708-172.31.20.57-1572709584342:blk_1073742080_1256 because on-disk length 990927 is shorter than NameNode recorded length 9223372036854775807 2019-11-02 16:14:08,005 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 to 172.31.19.252:50010 2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 to 172.31.19.252:50010
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742080_1256 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742080
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068 for deletion
2019-11-02 16:14:08,006 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742081_1257 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742081
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072 for deletion
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742083_1259 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742083
2019-11-02 16:14:08,007 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742086_1262 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742086
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078 for deletion
2019-11-02 16:14:08,008 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079 for deletion
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742087_1263 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742087
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742093_1269 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir1/blk_1073742093
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741996_1172 (numBytes=375618) to /172.31.19.252:50010
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742056_1232 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742056
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742057_1233 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742057
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742058_1234 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742058
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742059_1235 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742059
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073741990_1166 (numBytes=36455) to /172.31.19.252:50010
2019-11-02 16:14:08,009 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742060_1236 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742060
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742061_1237 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742061
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742062_1238 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742062
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742063_1239 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742063
2019-11-02 16:14:08,010 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742064_1240 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742064
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742065_1241 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742065
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742066_1242 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742066
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742067_1243 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742067
2019-11-02 16:14:08,011 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742068_1244 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742068
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742069_1245 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742069
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742070_1246 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742070
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742071_1247 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742071
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742072_1248 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742072
2019-11-02 16:14:08,012 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742074_1250 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742074
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742075_1251 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742075
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742076_1252 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742076
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742078_1254 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742078
2019-11-02 16:14:08,013 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-793432708-172.31.20.57-1572709584342 blk_1073742079_1255 file /data/dn/current/BP-793432708-172.31.20.57-1572709584342/current/finalized/subdir0/subdir0/blk_1073742079
2019-11-02 16:14:11,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.31.23.3:50010, datanodeUuid=77d8a096-eb31-4724-8d4a-187b54f9fe8d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-d359683c-0d1c-450d-81d1-67712499ef0b;nsid=696889141;c=1572709584342) Starting thread to transfer BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 to 172.31.19.252:50010
2019-11-02 16:14:11,007 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at hadoop2:50010: Transmitted BP-793432708-172.31.20.57-1572709584342:blk_1073742004_1180 (numBytes=25496) to /172.31.19.252:50010
2019-11-02 17:01:35,904 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-793432708-172.31.20.57-1572709584342 Total blocks: 88, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x15733bb21ccd9a44, containing 1 storage report(s), of which we sent 1. The reports had 88 total blocks and used 1 RPC(s). This took 1 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-11-02 19:52:23,330 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-793432708-172.31.20.57-1572709584342
```
Hadoop runtime error: ReplicaNotFoundException
While browsing the production logs I found a large number of errors. Here is one of them; I hope someone can help me resolve it.

May 5, 10:07:30.620 AM ERROR org.apache.hadoop.hdfs.server.datanode.DataNode hadoop-78:50010:DataXceiver error processing READ_BLOCK operation src: /192.0.0.78:34568 dst: /192.0.0.78:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-381875526-172.18.50.76-1450327742712:blk_1075578327_1837535
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:450)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:234)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:530)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
    at java.lang.Thread.run(Thread.java:745)
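One way to narrow this down (a hedged sketch, not a confirmed fix; the file path below is a placeholder): check from the NameNode's view whether the file genuinely lost a replica, or whether the reader simply raced with block deletion/rebalancing.

```
# List files that currently have corrupt or missing blocks
hdfs fsck / -list-corruptfileblocks
# Map a suspect file to its blocks and the DataNodes holding them
hdfs fsck /path/to/suspect/file -files -blocks -locations
```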
Spark fails to start on a Hadoop cluster
```
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/29 09:24:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
    at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
    at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
    at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:938)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:97)
    ... 47 elided
Caused by: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
;
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
    at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
    ... 61 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:191)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    ... 70 more
Caused by: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.hive.ql.exec.Utilities.createDirsWithPermission(Utilities.java:3679)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:597)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    ... 84 more
Caused by: org.apache.hadoop.ipc.RemoteException: /tmp (is not a directory)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy22.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy23.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
    ... 94 more
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
```
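The root cause buried in this trace is `ParentNotDirectoryException: /tmp (is not a directory)`: Hive's session setup tries to create its scratch directories under `/tmp` on HDFS, and fails because `/tmp` apparently exists there as a regular file. A minimal sketch of the usual repair (back up the file first if its contents matter):

```
hdfs dfs -ls /            # confirm /tmp is a plain file, not a directory
hdfs dfs -rm /tmp         # remove the file (copy it out first if needed)
hdfs dfs -mkdir -p /tmp
hdfs dfs -chmod 1777 /tmp # world-writable with sticky bit, as /tmp normally is
```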
Help! Hadoop 2.2.0 cluster: NameNode reports a NullPointerException right after HDFS starts
The log is as follows:

2015-02-07 01:01:46,610 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring NN shutdown. Shutting down immediately.
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.DFSUtil.substituteForWildcardAddress(DFSUtil.java:942)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.getHttpAddress(StandbyCheckpointer.java:108)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.setNameNodeAddresses(StandbyCheckpointer.java:90)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.<init>(StandbyCheckpointer.java:76)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startStandbyServices(FSNamesystem.java:994)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startStandbyServices(NameNode.java:1456)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.enterState(StandbyState.java:58)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:686)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2015-02-07 01:01:46,614 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-07 01:01:46,620 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

What I don't understand is why it keeps throwing a NullPointerException, yet when I debug remotely the error never appears. I'm completely lost.
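`DFSUtil.substituteForWildcardAddress` throwing an NPE from the standby checkpointer usually traces back to an incomplete HA HTTP-address configuration: the checkpointer needs `dfs.namenode.http-address.<nameservice>.<nnid>` resolvable for both NameNodes. A hedged sketch for verifying this (the nameservice and nn ids below are placeholders for your own):

```
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster           # e.g. nn1,nn2
hdfs getconf -confKey dfs.namenode.http-address.mycluster.nn1
hdfs getconf -confKey dfs.namenode.http-address.mycluster.nn2
```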
Problem with HDFS append when uploading a file
I set dfs.support.append to true on hadoop-0.20-cdh3u0 and wanted to test appending to an uploaded file. The first upload writes the first 4096 bytes of a file, the second uploads the rest. But the sum of the two uploaded sizes is smaller than the total file size. The problem seems to be that on the second upload HDFS deletes the first file, creates a new file, and uploads into that, so the final size is only the size of the second upload. Here is the Hadoop log:

2012-07-10 15:00:04,363 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=dell ip=/172.18.9.55 cmd=create src=/user/tmp/test.jpg dst=null perm=dell:supergroup:rw-r--r--
2012-07-10 15:00:04,373 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/tmp/test.jpg. blk_5234108089936612403_9027
2012-07-10 15:00:04,401 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.17.0.122:50010 is added to blk_5234108089936612403_9027 size 4096
2012-07-10 15:00:04,403 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.17.0.123:50010 is added to blk_5234108089936612403_9027 size 4096
2012-07-10 15:00:04,406 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.17.0.121:50010 is added to blk_5234108089936612403_9027 size 4096
2012-07-10 15:00:04,409 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /user/tmp/test.jpg from client DFSClient_771894663
2012-07-10 15:00:04,409 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /user/tmp/test.jpg is closed by DFSClient_771894663
2012-07-10 15:00:06,429 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_5234108089936612403 is added to invalidSet of 172.17.0.122:50010
2012-07-10 15:00:06,429 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_5234108089936612403 is added to invalidSet of 172.17.0.123:50010
2012-07-10 15:00:06,429 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_5234108089936612403 is added to invalidSet of 172.17.0.121:50010
2012-07-10 15:00:06,430 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=dell ip=/172.18.9.55 cmd=delete src=/user/tmp/test.jpg dst=null perm=null
2012-07-10 15:00:06,431 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=dell ip=/172.18.9.55 cmd=create src=/user/tmp/test.jpg dst=null perm=dell:supergroup:rw-r--r--
2012-07-10 15:00:06,435 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/tmp/test.jpg. blk_5499311137188998743_9028
2012-07-10 15:00:06,464 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.17.0.120:50010 is added to blk_5499311137188998743_9028 size 39455
2012-07-10 15:00:06,465 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.17.0.122:50010 is added to blk_5499311137188998743_9028 size 39455
2012-07-10 15:00:06,467 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.17.0.121:50010 is added to blk_5499311137188998743_9028 size 39455
2012-07-10 15:00:06,469 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /user/tmp/test.jpg from client DFSClient_771894663

The final file size is 39455. Please help me figure out how to fix this.
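The audit log itself points at the problem: the second upload issues `delete` and then `create` on /user/tmp/test.jpg, so the client is overwriting rather than appending; with the append path (`FileSystem.append` in the Java API) only the original `create` would appear. As a hedged sketch of the expected behaviour on a newer release (the `appendToFile` shell command did not exist yet in 0.20; file names here are examples):

```
hdfs dfs -put part1.bin /user/tmp/test.jpg           # first chunk
hdfs dfs -appendToFile part2.bin /user/tmp/test.jpg  # append the remainder
hdfs dfs -ls /user/tmp/test.jpg                      # size should now be the sum
```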
spark shell reports java.io.IOException: Not a file: hdfs://mini1:9000/spark/res when saving results to HDFS
scala> sc.textFile("hdfs://mini1:9000/spark").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).saveAsTextFile("hdfs://mini1:9000/spark/res2")

Running the code above fails. The directory exists in HDFS, and even if it didn't it would be created. Also, my code saves to the res2 directory, so why does the error complain about the res directory?

18/11/05 19:06:44 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
java.io.IOException: Not a file: hdfs://mini1:9000/spark/res
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
    at $iwC$$iwC$$iwC.<init>(<console>:41)
    at $iwC$$iwC.<init>(<console>:43)
    at $iwC.<init>(<console>:45)
    at <init>(<console>:47)
    at .<init>(<console>:51)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
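The error is about reading, not writing: `sc.textFile("hdfs://mini1:9000/spark")` takes everything directly under `/spark` as input, and by the time this run starts, `/spark/res` (output of an earlier run) is a subdirectory, which `FileInputFormat.getSplits` rejects with `Not a file`. A hedged sketch of the usual cleanup (adjust paths to your layout; keeping outputs outside the input directory avoids this entirely):

```
hdfs dfs -ls /spark                     # old output dirs now sit inside the input path
hdfs dfs -rm -r /spark/res /spark/res2  # remove them, or move outputs elsewhere
```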
Hadoop NameNode fails to start
INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: dao:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
    ... 8 more
17/03/16 10:54:06 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
17/03/16 10:54:06 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
17/03/16 10:54:06 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
17/03/16 10:54:06 FATAL namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: dao:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
    ... 8 more
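Despite the `Port in use` wording, the underlying `Cannot assign requested address` usually means the hostname `dao` does not resolve to an address owned by this machine, so Jetty cannot bind to it. A hedged sketch for checking, assuming a Linux host:

```
getent hosts dao           # what does the name resolve to?
hostname -I                # addresses this machine actually owns
netstat -tlnp | grep 50070 # rule out another process already on the port
```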
Hadoop 2.5.2: formatting HDFS fails
16/05/31 20:30:38 WARN namenode.FSEditLog: No class configured for node2, dfs.namenode.edits.journal-plugin.node2 is empty
16/05/31 20:30:38 FATAL namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: No class configured for node2
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getJournalClass(FSEditLog.java:1532)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1546)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:267)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
16/05/31 20:30:38 INFO util.ExitUtil: Exiting with status 1

hdfs-site.xml is configured as follows:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>node2:8485;node3:8485;node4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/hadoop/journalnodedata</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
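`FSEditLog` derives the journal plugin name from the URI scheme of each `dfs.namenode.shared.edits.dir` entry; the value above has no scheme, so `node2` itself is taken as a plugin name, hence "No class configured for node2". The value most likely needs the `qjournal://` scheme, i.e. `qjournal://node2:8485;node3:8485;node4:8485/mycluster`. A hedged sketch for verifying after the edit:

```
hdfs getconf -confKey dfs.namenode.shared.edits.dir  # should start with qjournal://
```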
Hadoop wordcount job fails
WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
17/05/03 02:35:31 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
17/05/03 02:35:31 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
17/05/03 02:35:31 WARN hdfs.DFSClient: Could not get block locations. Source file "/zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar" - Aborting...
17/05/03 02:35:31 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.136.131:9000/zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001
17/05/03 02:35:31 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
17/05/03 02:35:31 ERROR hdfs.DFSClient: Failed to close file /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)

The error says there are no nodes, but jps shows everything started normally, the firewall is off, and communication between the nodes works fine.
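"could only be replicated to 0 nodes" means the NameNode currently sees no usable DataNodes, regardless of what `jps` shows; common causes are DataNodes registered against a stale namespace ID (after a reformat) or no remaining disk space. A hedged sketch for checking (on this 1.x-era release the command is `hadoop dfsadmin`; newer releases use `hdfs dfsadmin`):

```
hadoop dfsadmin -report        # live/dead datanodes and remaining capacity
hadoop dfsadmin -safemode get  # confirm the NameNode is out of safe mode
```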
Spark reports an error when computing over a file on HDFS
scala> val rdd = sc.textFile("hdfs://...")
scala> rdd.count
java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$AppendRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
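This `VerifyError` on a protobuf-generated class is the classic symptom of a protobuf version clash (2.4 vs 2.5 era) between the Hadoop client jars Spark was built against and the Hadoop version serving HDFS. A hedged sketch for comparing the two sides (paths are assumptions; older Spark keeps jars under lib/ instead of jars/):

```
hadoop version                                             # cluster side
ls "$SPARK_HOME"/jars 2>/dev/null || ls "$SPARK_HOME"/lib  # what Spark bundles
# look for hadoop-* and protobuf-java-* versions in that listing
```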
Python HDFS operations throw an hdfs.util.HdfsError: None exception?
Uploading a file to HDFS from Python throws an exception:

  File "E:/代码/2019-6/6-10/myhdfs.py", line 7, in <module>
    client.upload('/foo','E:\\资料\\py.txt')
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 605, in upload
    raise err
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 594, in upload
    _upload(path_tuple)
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 524, in _upload
    self.write(_temp_path, wrap(reader, chunk_size, progress), **kwargs)
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 456, in write
    buffersize=buffersize,
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 112, in api_handler
    raise err
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 107, in api_handler
    **self.kwargs
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 210, in _request
    _on_error(response)
  File "E:\python-01\bin\lib\site-packages\hdfs\client.py", line 50, in _on_error
    raise HdfsError(message, exception=exception)
hdfs.util.HdfsError: None
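One hedged hypothesis: the `hdfs` Python package speaks WebHDFS, so an upload first contacts the NameNode and is then redirected to a DataNode hostname for the actual write; when that hostname is not resolvable or reachable from the Windows client, the failure can surface as an opaque `HdfsError: None`. A sketch for probing by hand (host, port and user name below are placeholders):

```
# A metadata call against the NameNode should succeed:
curl -i "http://namenode-host:50070/webhdfs/v1/foo?op=GETFILESTATUS&user.name=hdfs"
# A CREATE returns a 307 redirect; check whether the Location host resolves locally:
curl -i -X PUT "http://namenode-host:50070/webhdfs/v1/foo/py.txt?op=CREATE&user.name=hdfs"
```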
Spark configured successfully on Windows, but the HDFS NameNode will not start
16/08/22 09:44:14 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:322)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)

Please help. This is a single-machine setup; Spark already runs, but HDFS does not.
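`NativeIO$Windows.access0` failing to link almost always means the Windows native bits (`winutils.exe` and `hadoop.dll`) matching this exact Hadoop version are missing from `%HADOOP_HOME%\bin` or not on the PATH. A hedged sketch (cmd syntax; the install path is an assumption):

```
set HADOOP_HOME=C:\hadoop
set PATH=%HADOOP_HOME%\bin;%PATH%
:: place winutils.exe and hadoop.dll built for this Hadoop version in
:: %HADOOP_HOME%\bin; some setups also need hadoop.dll in C:\Windows\System32
```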
CDH master node: after a server reboot the NameNode will not start, and formatting the NameNode also fails. How can this be fixed?
Logs as follows:

 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$LogHeaderCorruptException: Unexpected version of the file system log file: -1. Current version = -60.
     at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.readLogVersion(EditLogFileInputStream.java:397)
     at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:146)
     at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.getVersion(EditLogFileInputStream.java:266)
     at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.validateEditLog(EditLogFileInputStream.java:318)
     at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.validateLog(FileJournalManager.java:544)
     at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:406)
     at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
     at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
     at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1478)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:827)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:686)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
 2018-02-11 17:17:47,421 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for (journal JournalAndStream(mgr=FileJournalManager(root=/dfs/nn), stream=null))
 org.apache.hadoop.hdfs.server.namenode.JournalManager$CorruptionException: In-progress edit log file is corrupt: EditLogFile(file=/dfs/nn/current/edits_inprogress_0000000000000109383.corrupt,first=0000000000000109383,last=-000000000000012345,inProgress=true,hasCorruptHeader=true)
     at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:410)
     at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
     at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
     at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1478)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:827)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:686)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
 2018-02-11 17:17:47,459 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Disabling journal JournalAndStream(mgr=FileJournalManager(root=/dfs/nn), stream=null)
 2018-02-11 17:17:47,460 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for too many journals
 2018-02-11 17:17:47,460 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Skipping jas JournalAndStream(mgr=FileJournalManager(root=/dfs/nn), stream=null) since it's disabled
 2018-02-11 17:17:47,471 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
 java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 109382 but unable to find any edit logs containing txid 109382
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1617)
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1575)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:704)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
 2018-02-11 17:17:47,774 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@alexnode1.cdh:50070
 2018-02-11 17:17:47,785 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
 2018-02-11 17:17:47,785 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
 2018-02-11 17:17:47,910 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
 2018-02-11 17:17:47,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
 2018-02-11 17:17:47,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
 2018-02-11 17:17:47,911 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
 java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 109382 but unable to find any edit logs containing txid 109382
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1617)
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1575)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:704)
     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
 2018-02-11 17:17:47,913 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
 2018-02-11 17:17:48,085 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down NameNode at alexnode1.cdh/192.168.1.68
 ************************************************************/
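The two decisive entries above are the LogHeaderCorruptException (the in-progress segment edits_inprogress_0000000000000109383 has a corrupt header) and the closing "Gap in transactions" IOException: once that journal is disabled, no edit log covers txid 109382, so the NameNode refuses to start. A possible recovery sequence is sketched below. This is only a hedged suggestion, not a verified fix: `hdfs namenode -recover` can discard the corrupt tail of the edit log, so back up the metadata directory (/dfs/nn, per the log above) before running anything.

 # Back up the entire NameNode metadata directory first
 cp -a /dfs/nn /dfs/nn.bak

 # Optionally inspect the corrupt in-progress segment with the offline edits viewer
 hdfs oev -i /dfs/nn/current/edits_inprogress_0000000000000109383.corrupt -o /tmp/edits.xml

 # Let the NameNode attempt edit-log recovery (interactive; may drop unrecoverable transactions)
 hdfs namenode -recover

If -recover completes, start the NameNode normally; the trade-off is losing whatever transactions sat in the corrupt unfinalized segment.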
NameNode shuts itself down after the Hadoop cluster starts
 2017-09-05 10:14:17,973 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON, in safe mode extension. The reported blocks 189 has reached the threshold 0.9990 of total blocks 189. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 9 seconds.
 2017-09-05 10:14:23,736 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.updateBlockForPipeline from 172.28.14.61:41497 Call#164039 Retry#12
 org.apache.hadoop.ipc.RetriableException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot get a new generation stamp and an access token for block BP-1552766309-172.28.41.193-1503397713205:blk_1073742745_1926. Name node is in safe mode. The reported blocks 189 has reached the threshold 0.9990 of total blocks 189. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 4 seconds.
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1331)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6234)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6309)
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:806)
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955)
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:422)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
 Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot get a new generation stamp and an access token for block BP-1552766309-172.28.41.193-1503397713205:blk_1073742745_1926. Name node is in safe mode. The reported blocks 189 has reached the threshold 0.9990 of total blocks 189. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 4 seconds.
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
     ... 13 more
 2017-09-05 10:14:27,976 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
 2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 55 secs
 2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF
 2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 2 datanodes
 2017-09-05 10:14:27,977 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
 2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 190
 2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
 2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 3
 2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
 2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 1
 2017-09-05 10:14:28,013 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 29 msec
 2017-09-05 10:14:59,141 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command listStatus is: 0
 2017-09-05 10:14:59,141 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command * is: 0
 2017-09-05 10:14:59,143 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command listStatus is: 1
 2017-09-05 10:14:59,145 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command * is: 1
 2017-09-05 10:14:59,185 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command listStatus is: 1
 2017-09-05 10:14:59,186 INFO org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager: topN size for command * is: 1
 2017-09-05 10:16:50,848 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.28.41.196
 2017-09-05 10:16:50,849 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
 2017-09-05 10:16:50,849 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 15839
 2017-09-05 10:16:50,849 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 88 18
 2017-09-05 10:16:50,883 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 120 20
 2017-09-05 10:16:50,910 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop_name/current/edits_inprogress_0000000000000015839 -> /home/hadoop/hadoop_name/current/edits_0000000000000015839-0000000000000015841
 2017-09-05 10:16:50,915 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 15842
 2017-09-05 10:18:51,193 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.28.41.196
 2017-09-05 10:18:51,193 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
 2017-09-05 10:18:51,193 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 15842
 2017-09-05 10:18:51,194 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 19 8
 2017-09-05 10:18:51,372 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 129 76
 2017-09-05 10:18:51,405 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop_name/current/edits_inprogress_0000000000000015842 -> /home/hadoop/hadoop_name/current/edits_0000000000000015842-0000000000000015843
 2017-09-05 10:18:51,406 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 15844
 2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.28.41.196
 2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
 2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 15844
 2017-09-05 10:20:52,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 39 341
 2017-09-05 10:20:52,258 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 103 413
 2017-09-05 10:20:52,284 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop_name/current/edits_inprogress_0000000000000015844 -> /home/hadoop/hadoop_name/current/edits_0000000000000015844-0000000000000015845
 2017-09-05 10:20:52,284 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 15846

Does an error like this indicate a problem?
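Judging only from this excerpt, nothing is failing: safe mode on startup is normal (all 189 reported blocks reached the 0.9990 threshold), the SafeModeException is just a client write being refused during that window, and the log then shows "Leaving safe mode after 55 secs" followed by routine edit-log rolls. If the NameNode still shuts down afterwards, the cause must be in later log lines than these. To check or control safe mode by hand, the standard dfsadmin subcommands apply; a short sketch, to be run on a host with the HDFS client configuration:

 # Show the current safe mode state
 hdfs dfsadmin -safemode get

 # Block until the NameNode leaves safe mode (useful in startup scripts)
 hdfs dfsadmin -safemode wait

 # Force an exit from safe mode (only when block reports are known to be complete)
 hdfs dfsadmin -safemode leave

Prefer get/wait here; forcing leave while blocks are still unreported can surface missing-block errors.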