Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error, and unable to put files to HDFS

Hadoop version is 3.1 (the startup log below actually reports 3.2.0), running on Ubuntu 18.
Problem 1: browsing the HDFS directory in the web UI shows:
Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error
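For reference, the same LISTSTATUS request can be issued against the WebHDFS REST endpoint directly to rule out a browser-side problem; a minimal sketch, assuming the NameNode web UI listens on the default Hadoop 3.x port 9870 on localhost and the HDFS user is hadoop:

    curl -i "http://localhost:9870/webhdfs/v1/?op=LISTSTATUS&user.name=hadoop"

When the problem is present, this returns the same HTTP 500 Server Error instead of a JSON FileStatuses listing.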
Problem 2: files cannot be put to HDFS either.
The NameNode log is as follows:

438 WARN org.eclipse.jetty.servlet.ServletHandler: Error for /webhdfs/v1/
java.lang.NoClassDefFoundError: javax/activation/DataSource
    at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:457)
    at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85)
    at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:236)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:186)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:146)
    at javax.xml.bind.ContextFinder.find(ContextFinder.java:350)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:446)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:409)
    at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.<init>(WadlApplicationContextImpl.java:103)
    at com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:100)
    at com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
    at com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:106)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1359)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
    at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
    at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
    at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:790)
    at com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:509)
    at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
    at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
    at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
    at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
    at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:577)
    at javax.servlet.GenericServlet.init(GenericServlet.java:244)
    at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:643)
    at org.eclipse.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:499)
    at org.eclipse.jetty.servlet.ServletHolder.ensureInstance(ServletHolder.java:791)
    at org.eclipse.jetty.servlet.ServletHolder.prepare(ServletHolder.java:776)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:579)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
    ... 65 more
2019-06-18 15:35:01,950 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
2019-06-18 15:39:17,698 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 56 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 22 
2019-06-18 15:39:25,202 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
2019-06-18 15:39:45,858 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)

The DataNode log is attached below:
2019-06-18 14:52:36,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = gx-virtual-machine/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.2.0
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang3-3.7.jar:/usr/local/hadoop/share/hadoop/common/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-text-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/common/lib/comm
ons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-kms-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.2.0.jar:/usr/local/ha
doop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-text-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jett
ison-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/local/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/usr/loc
al/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.0.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 2019-01-08T06:08Z
STARTUP_MSG: java = 11.0.3
************************************************************/
2019-06-18 14:52:36,863 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-18 14:52:41,503 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:42,424 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2019-06-18 14:52:44,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-06-18 14:52:44,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2019-06-18 14:52:46,504 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-18 14:52:46,511 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2019-06-18 14:52:46,566 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is gx-virtual-machine
2019-06-18 14:52:46,567 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-18 14:52:46,592 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2019-06-18 14:52:46,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:9866
2019-06-18 14:52:46,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 10485760 bytes/s
2019-06-18 14:52:46,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 50
2019-06-18 14:52:47,198 INFO org.eclipse.jetty.util.log: Logging initialized @15269ms
2019-06-18 14:52:48,022 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-18 14:52:48,062 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2019-06-18 14:52:48,161 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-06-18 14:52:48,556 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 44121
2019-06-18 14:52:48,580 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-06T01:11:56+08:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-18 14:52:49,011 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@7876d598{/logs,file:///usr/local/hadoop/logs/,AVAILABLE}
2019-06-18 14:52:49,018 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@5af28b27{/static,file:///usr/local/hadoop/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2019-06-18 14:52:50,151 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@547e29a4{/,file:///usr/local/hadoop/share/hadoop/hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2019-06-18 14:52:50,242 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@6f45a1a0{HTTP/1.1,[http/1.1]}{localhost:44121}
2019-06-18 14:52:50,243 INFO org.eclipse.jetty.server.Server: Started @18329ms
2019-06-18 14:52:52,165 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:9864
2019-06-18 14:52:52,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2019-06-18 14:52:52,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2019-06-18 14:52:52,242 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2019-06-18 14:52:52,720 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2019-06-18 14:52:52,880 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9867
2019-06-18 14:52:54,839 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:9867
2019-06-18 14:52:55,160 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2019-06-18 14:52:55,365 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2019-06-18 14:52:55,418 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
2019-06-18 14:52:55,532 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-06-18 14:52:55,561 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9867: starting
2019-06-18 14:52:58,314 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2019-06-18 14:52:58,329 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-06-18 14:52:58,458 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 55815@gx-virtual-machine
2019-06-18 14:52:58,478 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory with location [DISK]file:/usr/local/hadoop/tmp/dfs/data is not formatted for namespace 317473294. Formatting...
2019-06-18 14:52:58,479 INFO org.apache.hadoop.hdfs.server.common.Storage: Generated new storageID DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e for directory /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:58,749 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:58,750 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:58,753 INFO org.apache.hadoop.hdfs.server.common.Storage: Block pool storage directory for location [DISK]file:/usr/local/hadoop/tmp/dfs/data and block pool id BP-200946205-127.0.1.1-1560840480894 is not formatted. Formatting ...
2019-06-18 14:52:58,753 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-200946205-127.0.1.1-1560840480894 directory /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894/current
2019-06-18 14:52:58,772 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=317473294;bpid=BP-200946205-127.0.1.1-1560840480894;lv=-57;nsInfo=lv=-65;cid=CID-eb45654d-0bc6-4348-b02f-e03603e1ae37;nsid=317473294;c=1560840480894;bpid=BP-200946205-127.0.1.1-1560840480894;dnuuid=null
2019-06-18 14:52:58,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6a2049c6-1a18-437a-97bd-51c5bb65a639
2019-06-18 14:52:59,549 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e
2019-06-18 14:52:59,553 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - [DISK]file:/usr/local/hadoop/tmp/dfs/data, StorageType: DISK
2019-06-18 14:52:59,615 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2019-06-18 14:52:59,680 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:59,801 INFO org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker: Scheduled health check for volume /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:59,809 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:59,839 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data...
2019-06-18 14:53:00,166 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-200946205-127.0.1.1-1560840480894 on /usr/local/hadoop/tmp/dfs/data: 327ms
2019-06-18 14:53:00,168 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-200946205-127.0.1.1-1560840480894: 359ms
2019-06-18 14:53:00,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data...
2019-06-18 14:53:00,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice: Replica Cache file: /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894/current/replicas doesn't exist
2019-06-18 14:53:00,198 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data: 17ms
2019-06-18 14:53:00,198 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-200946205-127.0.1.1-1560840480894: 27ms
2019-06-18 14:53:00,208 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:53:00,221 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/hadoop/tmp/dfs/data, DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e): finished scanning block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:53:00,401 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/hadoop/tmp/dfs/data, DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e): no suitable block pools found to scan. Waiting 1814399799 ms.
2019-06-18 14:53:00,418 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 2019/6/18 下午8:05 with interval of 21600000ms
2019-06-18 14:53:00,463 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-200946205-127.0.1.1-1560840480894 (Datanode Uuid 6a2049c6-1a18-437a-97bd-51c5bb65a639) service to localhost/127.0.0.1:9000 beginning handshake with NN
2019-06-18 14:53:00,825 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-200946205-127.0.1.1-1560840480894 (Datanode Uuid 6a2049c6-1a18-437a-97bd-51c5bb65a639) service to localhost/127.0.0.1:9000 successfully registered with NN
2019-06-18 14:53:00,825 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/127.0.0.1:9000 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2019-06-18 14:53:01,524 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb210af820fa10abf, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 19 msec to generate and 231 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-06-18 14:53:01,525 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 15:44:37,567 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001 src: /127.0.0.1:34774 dest: /127.0.0.1:9866
2019-06-18 15:44:37,733 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34774, dest: /127.0.0.1:9866, bytes: 8260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001, duration(ns): 75831098
2019-06-18 15:44:37,737 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,256 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002 src: /127.0.0.1:34776 dest: /127.0.0.1:9866
2019-06-18 15:44:38,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34776, dest: /127.0.0.1:9866, bytes: 953, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002, duration(ns): 5252820
2019-06-18 15:44:38,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,340 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003 src: /127.0.0.1:34778 dest: /127.0.0.1:9866
2019-06-18 15:44:38,365 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34778, dest: /127.0.0.1:9866, bytes: 11392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003, duration(ns): 19816531
2019-06-18 15:44:38,372 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,428 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004 src: /127.0.0.1:34780 dest: /127.0.0.1:9866
2019-06-18 15:44:38,455 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34780, dest: /127.0.0.1:9866, bytes: 1061, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004, duration(ns): 9820674
2019-06-18 15:44:38,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005 src: /127.0.0.1:34782 dest: /127.0.0.1:9866
2019-06-18 15:44:38,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34782, dest: /127.0.0.1:9866, bytes: 620, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005, duration(ns): 9424051
2019-06-18 15:44:38,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,569 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006 src: /127.0.0.1:34784 dest: /127.0.0.1:9866
2019-06-18 15:44:38,579 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34784, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006, duration(ns): 6662498
2019-06-18 15:44:38,579 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007 src: /127.0.0.1:34786 dest: /127.0.0.1:9866
2019-06-18 15:44:38,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34786, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007, duration(ns): 5047916
2019-06-18 15:44:38,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,713 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008 src: /127.0.0.1:34788 dest: /127.0.0.1:9866
2019-06-18 15:44:38,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34788, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008, duration(ns): 8532382
2019-06-18 15:44:38,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009 src: /127.0.0.1:34790 dest: /127.0.0.1:9866
2019-06-18 15:44:38,807 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34790, dest: /127.0.0.1:9866, bytes: 690, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009, duration(ns): 5589094
2019-06-18 15:44:38,813 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:01,961 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010 src: /127.0.0.1:36578 dest: /127.0.0.1:9866
2019-06-19 09:54:02,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36578, dest: /127.0.0.1:9866, bytes: 8260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010, duration(ns): 32739756
2019-06-19 09:54:02,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,125 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011 src: /127.0.0.1:36580 dest: /127.0.0.1:9866
2019-06-19 09:54:02,154 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36580, dest: /127.0.0.1:9866, bytes: 953, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011, duration(ns): 12137675
2019-06-19 09:54:02,154 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012 src: /127.0.0.1:36582 dest: /127.0.0.1:9866
2019-06-19 09:54:02,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36582, dest: /127.0.0.1:9866, bytes: 11392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012, duration(ns): 8740891
2019-06-19 09:54:02,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,307 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013 src: /127.0.0.1:36584 dest: /127.0.0.1:9866
2019-06-19 09:54:02,322 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36584, dest: /127.0.0.1:9866, bytes: 1061, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013, duration(ns): 8680367
2019-06-19 09:54:02,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014 src: /127.0.0.1:36586 dest: /127.0.0.1:9866
2019-06-19 09:54:02,413 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36586, dest: /127.0.0.1:9866, bytes: 620, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014, duration(ns): 8474258
2019-06-19 09:54:02,413 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015 src: /127.0.0.1:36588 dest: /127.0.0.1:9866
2019-06-19 09:54:02,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36588, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, duration(ns): 6946259
2019-06-19 09:54:02,503 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016 src: /127.0.0.1:36590 dest: /127.0.0.1:9866
2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36590, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, duration(ns): 6602106
2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,635 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017 src: /127.0.0.1:36592 dest: /127.0.0.1:9866
2019-06-19 09:54:02,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36592, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017, duration(ns): 9690339
2019-06-19 09:54:02,654 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:

1 answer

Solved. It was a JDK problem: after switching from JDK 11 back to JDK 8 everything works. Remember to run stop-dfs.sh and then start-dfs.sh afterwards so the daemons restart on the new JDK.
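A minimal sketch of that fix, assuming OpenJDK 8 is installed at /usr/lib/jvm/java-8-openjdk-amd64 (that path and the file used in the final put are assumptions for illustration; adjust them to your system):

    # In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, point Hadoop at JDK 8
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

    # Restart HDFS so NameNode/DataNode pick up the new JDK, then retry the upload
    stop-dfs.sh
    start-dfs.sh
    hdfs dfs -put test.txt /user/hadoop/    # hypothetical local file and target path

The underlying cause is that the javax.activation and JAXB classes used by Hadoop's Jersey-based WebHDFS servlet were removed from the JDK in Java 11 (JEP 320), which is what produces the NoClassDefFoundError: javax/activation/DataSource in the NameNode log; on JDK 8 those classes are still part of the runtime, so the WebHDFS endpoint initializes normally.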
