Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error, and putting files to HDFS fails as well

Hadoop version is 3.1, on Ubuntu 18.
Problem 1: browsing the HDFS directory in the web UI shows:
Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error
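
For reference, the failing call can be reproduced outside the browser. A minimal sketch, assuming the Hadoop 3.x default NameNode web port 9870 on localhost (an assumption; adjust host and port to your dfs.namenode.http-address):

```bash
# Same request the web UI's file browser issues under the hood.
# A healthy NameNode answers HTTP 200 with a JSON FileStatuses list;
# here it answers HTTP 500 "Server Error" instead.
curl -i "http://localhost:9870/webhdfs/v1/?op=LISTSTATUS"
```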
Problem 2:
The NameNode log is as follows:

438 WARN org.eclipse.jetty.servlet.ServletHandler: Error for /webhdfs/v1/
java.lang.NoClassDefFoundError: javax/activation/DataSource
    at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:457)
    at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85)
    at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156)
    at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:236)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:186)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:146)
    at javax.xml.bind.ContextFinder.find(ContextFinder.java:350)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:446)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:409)
    at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.<init>(WadlApplicationContextImpl.java:103)
    at com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:100)
    at com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
    at com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:106)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1359)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
    at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
    at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
    at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:790)
    at com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:509)
    at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
    at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
    at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
    at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
    at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:577)
    at javax.servlet.GenericServlet.init(GenericServlet.java:244)
    at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:643)
    at org.eclipse.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:499)
    at org.eclipse.jetty.servlet.ServletHolder.ensureInstance(ServletHolder.java:791)
    at org.eclipse.jetty.servlet.ServletHolder.prepare(ServletHolder.java:776)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:579)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
    ... 65 more
2019-06-18 15:35:01,950 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
2019-06-18 15:39:17,698 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 56 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 22 
2019-06-18 15:39:25,202 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
2019-06-18 15:39:45,858 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
    at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
    at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.base/java.lang.Thread.run(Thread.java:834)
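
The key line above is `Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource`: the Jersey/JAXB stack that serves /webhdfs/v1/ depends on the JavaBeans Activation Framework, and JDK 11 removed the java.activation module along with the other Java EE modules (JEP 320), which matches the `java = 11.0.3` in the DataNode startup banner below. A quick way to check the runtime (a sketch; `--list-modules` exists on JDK 9 and later):

```bash
# Which JVM do the Hadoop scripts pick up?
java -version

# JDK 9/10 still list java.activation (deprecated); JDK 11+ print
# nothing here, because JEP 320 removed the module entirely.
java --list-modules | grep -i activation
```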

Attached DataNode log:
2019-06-18 14:52:36,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = gx-virtual-machine/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.2.0
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang3-3.7.jar:/usr/local/hadoop/share/hadoop/common/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-text-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-2.9.5.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-kms-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-text-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-2.9.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/local/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.0.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 2019-01-08T06:08Z
STARTUP_MSG: java = 11.0.3
************************************************************/
2019-06-18 14:52:36,863 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-18 14:52:41,503 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:42,424 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2019-06-18 14:52:44,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-06-18 14:52:44,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2019-06-18 14:52:46,504 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-18 14:52:46,511 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2019-06-18 14:52:46,566 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is gx-virtual-machine
2019-06-18 14:52:46,567 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-18 14:52:46,592 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2019-06-18 14:52:46,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:9866
2019-06-18 14:52:46,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 10485760 bytes/s
2019-06-18 14:52:46,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 50
2019-06-18 14:52:47,198 INFO org.eclipse.jetty.util.log: Logging initialized @15269ms
2019-06-18 14:52:48,022 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-18 14:52:48,062 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2019-06-18 14:52:48,161 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-06-18 14:52:48,174 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-06-18 14:52:48,556 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 44121
2019-06-18 14:52:48,580 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-06T01:11:56+08:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-18 14:52:49,011 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@7876d598{/logs,file:///usr/local/hadoop/logs/,AVAILABLE}
2019-06-18 14:52:49,018 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@5af28b27{/static,file:///usr/local/hadoop/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2019-06-18 14:52:50,151 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@547e29a4{/,file:///usr/local/hadoop/share/hadoop/hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2019-06-18 14:52:50,242 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@6f45a1a0{HTTP/1.1,[http/1.1]}{localhost:44121}
2019-06-18 14:52:50,243 INFO org.eclipse.jetty.server.Server: Started @18329ms
2019-06-18 14:52:52,165 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:9864
2019-06-18 14:52:52,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2019-06-18 14:52:52,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2019-06-18 14:52:52,242 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2019-06-18 14:52:52,720 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2019-06-18 14:52:52,880 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9867
2019-06-18 14:52:54,839 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:9867
2019-06-18 14:52:55,160 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2019-06-18 14:52:55,365 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2019-06-18 14:52:55,418 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
2019-06-18 14:52:55,532 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-06-18 14:52:55,561 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9867: starting
2019-06-18 14:52:58,314 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2019-06-18 14:52:58,329 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-06-18 14:52:58,458 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 55815@gx-virtual-machine
2019-06-18 14:52:58,478 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory with location [DISK]file:/usr/local/hadoop/tmp/dfs/data is not formatted for namespace 317473294. Formatting...
2019-06-18 14:52:58,479 INFO org.apache.hadoop.hdfs.server.common.Storage: Generated new storageID DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e for directory /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:58,749 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:58,750 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:58,753 INFO org.apache.hadoop.hdfs.server.common.Storage: Block pool storage directory for location [DISK]file:/usr/local/hadoop/tmp/dfs/data and block pool id BP-200946205-127.0.1.1-1560840480894 is not formatted. Formatting ...
2019-06-18 14:52:58,753 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-200946205-127.0.1.1-1560840480894 directory /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894/current
2019-06-18 14:52:58,772 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=317473294;bpid=BP-200946205-127.0.1.1-1560840480894;lv=-57;nsInfo=lv=-65;cid=CID-eb45654d-0bc6-4348-b02f-e03603e1ae37;nsid=317473294;c=1560840480894;bpid=BP-200946205-127.0.1.1-1560840480894;dnuuid=null
2019-06-18 14:52:58,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6a2049c6-1a18-437a-97bd-51c5bb65a639
2019-06-18 14:52:59,549 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e
2019-06-18 14:52:59,553 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - [DISK]file:/usr/local/hadoop/tmp/dfs/data, StorageType: DISK
2019-06-18 14:52:59,615 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2019-06-18 14:52:59,680 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:59,801 INFO org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker: Scheduled health check for volume /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:52:59,809 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:52:59,839 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data...
2019-06-18 14:53:00,166 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-200946205-127.0.1.1-1560840480894 on /usr/local/hadoop/tmp/dfs/data: 327ms
2019-06-18 14:53:00,168 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-200946205-127.0.1.1-1560840480894: 359ms
2019-06-18 14:53:00,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data...
2019-06-18 14:53:00,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice: Replica Cache file: /usr/local/hadoop/tmp/dfs/data/current/BP-200946205-127.0.1.1-1560840480894/current/replicas doesn't exist
2019-06-18 14:53:00,198 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data: 17ms
2019-06-18 14:53:00,198 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-200946205-127.0.1.1-1560840480894: 27ms
2019-06-18 14:53:00,208 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-200946205-127.0.1.1-1560840480894 on volume /usr/local/hadoop/tmp/dfs/data
2019-06-18 14:53:00,221 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/hadoop/tmp/dfs/data, DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e): finished scanning block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 14:53:00,401 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/hadoop/tmp/dfs/data, DS-8b3e1e6d-135a-433a-93bb-3e62598daf5e): no suitable block pools found to scan. Waiting 1814399799 ms.
2019-06-18 14:53:00,418 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 2019/6/18 8:05 PM with interval of 21600000ms
2019-06-18 14:53:00,463 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-200946205-127.0.1.1-1560840480894 (Datanode Uuid 6a2049c6-1a18-437a-97bd-51c5bb65a639) service to localhost/127.0.0.1:9000 beginning handshake with NN
2019-06-18 14:53:00,825 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-200946205-127.0.1.1-1560840480894 (Datanode Uuid 6a2049c6-1a18-437a-97bd-51c5bb65a639) service to localhost/127.0.0.1:9000 successfully registered with NN
2019-06-18 14:53:00,825 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/127.0.0.1:9000 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2019-06-18 14:53:01,524 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb210af820fa10abf, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 19 msec to generate and 231 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-06-18 14:53:01,525 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-200946205-127.0.1.1-1560840480894
2019-06-18 15:44:37,567 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001 src: /127.0.0.1:34774 dest: /127.0.0.1:9866
2019-06-18 15:44:37,733 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34774, dest: /127.0.0.1:9866, bytes: 8260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001, duration(ns): 75831098
2019-06-18 15:44:37,737 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,256 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002 src: /127.0.0.1:34776 dest: /127.0.0.1:9866
2019-06-18 15:44:38,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34776, dest: /127.0.0.1:9866, bytes: 953, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002, duration(ns): 5252820
2019-06-18 15:44:38,266 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,340 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003 src: /127.0.0.1:34778 dest: /127.0.0.1:9866
2019-06-18 15:44:38,365 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34778, dest: /127.0.0.1:9866, bytes: 11392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003, duration(ns): 19816531
2019-06-18 15:44:38,372 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,428 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004 src: /127.0.0.1:34780 dest: /127.0.0.1:9866
2019-06-18 15:44:38,455 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34780, dest: /127.0.0.1:9866, bytes: 1061, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004, duration(ns): 9820674
2019-06-18 15:44:38,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741828_1004, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005 src: /127.0.0.1:34782 dest: /127.0.0.1:9866
2019-06-18 15:44:38,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34782, dest: /127.0.0.1:9866, bytes: 620, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005, duration(ns): 9424051
2019-06-18 15:44:38,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741829_1005, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,569 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006 src: /127.0.0.1:34784 dest: /127.0.0.1:9866
2019-06-18 15:44:38,579 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34784, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006, duration(ns): 6662498
2019-06-18 15:44:38,579 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741830_1006, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007 src: /127.0.0.1:34786 dest: /127.0.0.1:9866
2019-06-18 15:44:38,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34786, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007, duration(ns): 5047916
2019-06-18 15:44:38,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741831_1007, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,713 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008 src: /127.0.0.1:34788 dest: /127.0.0.1:9866
2019-06-18 15:44:38,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34788, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008, duration(ns): 8532382
2019-06-18 15:44:38,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741832_1008, type=LAST_IN_PIPELINE terminating
2019-06-18 15:44:38,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009 src: /127.0.0.1:34790 dest: /127.0.0.1:9866
2019-06-18 15:44:38,807 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:34790, dest: /127.0.0.1:9866, bytes: 690, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1864191814_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009, duration(ns): 5589094
2019-06-18 15:44:38,813 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741833_1009, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:01,961 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010 src: /127.0.0.1:36578 dest: /127.0.0.1:9866
2019-06-19 09:54:02,003 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36578, dest: /127.0.0.1:9866, bytes: 8260, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010, duration(ns): 32739756
2019-06-19 09:54:02,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741834_1010, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,125 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011 src: /127.0.0.1:36580 dest: /127.0.0.1:9866
2019-06-19 09:54:02,154 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36580, dest: /127.0.0.1:9866, bytes: 953, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011, duration(ns): 12137675
2019-06-19 09:54:02,154 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741835_1011, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012 src: /127.0.0.1:36582 dest: /127.0.0.1:9866
2019-06-19 09:54:02,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36582, dest: /127.0.0.1:9866, bytes: 11392, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012, duration(ns): 8740891
2019-06-19 09:54:02,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741836_1012, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,307 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013 src: /127.0.0.1:36584 dest: /127.0.0.1:9866
2019-06-19 09:54:02,322 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36584, dest: /127.0.0.1:9866, bytes: 1061, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013, duration(ns): 8680367
2019-06-19 09:54:02,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741837_1013, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014 src: /127.0.0.1:36586 dest: /127.0.0.1:9866
2019-06-19 09:54:02,413 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36586, dest: /127.0.0.1:9866, bytes: 620, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014, duration(ns): 8474258
2019-06-19 09:54:02,413 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741838_1014, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015 src: /127.0.0.1:36588 dest: /127.0.0.1:9866
2019-06-19 09:54:02,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36588, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, duration(ns): 6946259
2019-06-19 09:54:02,503 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016 src: /127.0.0.1:36590 dest: /127.0.0.1:9866
2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36590, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, duration(ns): 6602106
2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, type=LAST_IN_PIPELINE terminating
2019-06-19 09:54:02,635 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017 src: /127.0.0.1:36592 dest: /127.0.0.1:9866
2019-06-19 09:54:02,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36592, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017, duration(ns): 9690339
2019-06-19 09:54:02,654 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
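Note that the DataNode log above shows the block writes completing normally, so the data path itself looks healthy; the Server Error comes from the NameNode's embedded web layer. The failure from problem 1 can be reproduced directly against the WebHDFS REST endpoint. A minimal sketch, assuming the NameNode web UI listens on the Hadoop 3.x default port 9870 on localhost (adjust host/port to your setup):

```bash
# Reproduce problem 1 against the WebHDFS REST API.
# A healthy NameNode answers with a JSON FileStatuses listing;
# with the javax.activation failure it answers HTTP 500 Server Error.
curl -i "http://localhost:9870/webhdfs/v1/?op=LISTSTATUS"
```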

weixin_43849718: Hey, same problem here. Ubuntu is 18.04, Hadoop is 3.2.1, and switching the JDK still didn't help.
Replied 3 months ago
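One thing worth double-checking when switching JDKs appears not to help: Hadoop daemons read JAVA_HOME from hadoop-env.sh, not from your login shell, and they keep running on the old JVM until they are restarted. A quick sanity check, as a sketch assuming a standard $HADOOP_HOME layout:

```bash
# Which JDK does hadoop-env.sh pin the daemons to?
grep -n "JAVA_HOME" "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"

# Which java binary is the running NameNode actually using?
# (The command path in the output reveals the JDK in use.)
ps -ef | grep '[N]ameNode'
```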

1 Answer

Solved. It was a JDK problem: after switching from JDK 11 back to JDK 8 everything worked. Remember to run stop-dfs.sh and then start-dfs.sh afterwards so the change takes effect.
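For background, this matches the stack trace: JDK 9 deprecated the java.activation and JAXB modules and JDK 11 removed them from the JDK entirely, so Hadoop 3.1's WebHDFS servlet, which relies on Jersey/JAXB, fails with NoClassDefFoundError: javax/activation/DataSource. A minimal sketch of the fix described above, assuming an OpenJDK 8 installed at /usr/lib/jvm/java-8-openjdk-amd64 (that path is an example; adjust it for your machine):

```bash
# Point all Hadoop daemons at JDK 8; hadoop-env.sh is sourced at daemon startup.
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' \
  >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"

# Restart HDFS so the NameNode and DataNode pick up the new JVM.
"$HADOOP_HOME/sbin/stop-dfs.sh"
"$HADOOP_HOME/sbin/start-dfs.sh"

# Verify: the WebHDFS listing should now return JSON instead of a 500.
curl -s "http://localhost:9870/webhdfs/v1/?op=LISTSTATUS"
```

If staying on a newer JDK matters to you, note that official Java 11 runtime support only arrived with the Hadoop 3.3 line as far as the release notes go, so on 3.1/3.2 downgrading to JDK 8 is the dependable route.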
