Why does the jre/lib/security directory of the JDK need certificates imported into it?

A project of mine calls a third-party API over HTTPS using HttpClient and used to run fine. Then QA swapped the test environment's JDK for OpenJDK, and the calls started failing at runtime.
The error is as follows:

java.lang.RuntimeException: javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty

Further investigation showed that the old JDK's jre/lib/security directory contained .cer certificates, while the freshly installed OpenJDK had none there. Copying the certificates from the old JDK into the corresponding OpenJDK directory fixed the problem.
I'd like to confirm what these certificates actually do. Is it fair to think of it as analogous to a browser needing to trust a certificate before it can access an HTTPS site?
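For reference, here is a minimal diagnostic sketch in Java (the cacerts path and the "changeit" password are the standard JDK 8 defaults, not details taken from this question) that lists what the JVM's default truststore contains. An empty or missing cacerts is what leads to the "trustAnchors parameter must be non-empty" error shown above.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class TrustStoreCheck {
    public static void main(String[] args) throws Exception {
        // Default JSSE truststore on JDK 8: <java-home>/lib/security/cacerts,
        // i.e. $JAVA_HOME/jre/lib/security/cacerts. "changeit" is the stock password.
        String path = System.getProperty("java.home") + "/lib/security/cacerts";

        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, "changeit".toCharArray());
        }

        // If this prints 0 (or the file is missing entirely, as with the OpenJDK
        // install described above), HTTPS handshakes fail with
        // "the trustAnchors parameter must be non-empty".
        System.out.println("Trusted entries in " + path + ": " + trustStore.size());
        for (String alias : Collections.list(trustStore.aliases())) {
            System.out.println("  " + alias);
        }
    }
}
```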

2 answers

Different JDK builds ship different cryptographic components, so the bundled security packages and trusted certificates can differ from one distribution to another.
For example, the HTTPS cipher suites and security policies can differ even between the Windows and Linux builds of the same JDK version.
The certificates you describe carry the trust material needed during the HTTPS handshake. This third-party API most likely presents a certificate that is not issued by a well-known public CA (for example, a self-signed or privately issued one), so it has to be imported into the JDK's truststore before the connection is trusted (a sketch of this import is shown below).
Other HTTPS endpoints use certificates signed by public CAs that are already present in the default truststore, so for those nothing extra needs to be imported.
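To make the previous point concrete, here is a hedged sketch of importing a third party's .cer file into the JDK-wide truststore. The file name, alias, and password are made-up illustrations; the keytool command in the comment is the usual CLI equivalent.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class ImportCert {
    public static void main(String[] args) throws Exception {
        // Hypothetical file name and alias, for illustration only.
        String cacertsPath = System.getProperty("java.home") + "/lib/security/cacerts";
        String cerPath = "third-party.cer";
        char[] password = "changeit".toCharArray();

        // Parse the .cer file (a DER- or PEM-encoded X.509 certificate).
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream(cerPath)) {
            cert = (X509Certificate) CertificateFactory.getInstance("X.509")
                    .generateCertificate(in);
        }

        // Load the existing truststore, add the certificate as a trusted entry,
        // and write it back. Roughly equivalent to:
        //   keytool -importcert -alias third-party -file third-party.cer \
        //           -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(cacertsPath)) {
            trustStore.load(in, password);
        }
        trustStore.setCertificateEntry("third-party", cert);
        try (FileOutputStream out = new FileOutputStream(cacertsPath)) {
            trustStore.store(out, password);
        }
        System.out.println("Imported: " + cert.getSubjectX500Principal());
    }
}
```

Note that what JSSE actually reads is the cacerts (or jssecacerts) keystore in that directory, not loose .cer files lying next to it, so what likely mattered when copying the old JDK's security directory was the truststore keystore itself.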

HTTPS is HTTP layered on top of SSL/TLS, and SSL/TLS relies on certificates to verify the server's identity.
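Building on that: instead of editing the JDK-wide cacerts, an application can also supply its own truststore when it builds its HTTP client. Below is a hedged sketch assuming Apache HttpClient 4.5+ (the question only says "httpclient", so the exact API may differ); the truststore path, password, and URL are invented for illustration.

```java
import java.io.File;
import javax.net.ssl.SSLContext;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContexts;

public class HttpsWithCustomTrustStore {
    public static void main(String[] args) throws Exception {
        // Hypothetical application-level truststore holding the third party's certificate.
        File trustStoreFile = new File("/opt/app/third-party-truststore.jks");

        // Server certificates are verified against this truststore instead of the
        // JDK's <java-home>/lib/security/cacerts.
        SSLContext sslContext = SSLContexts.custom()
                .loadTrustMaterial(trustStoreFile, "changeit".toCharArray())
                .build();

        try (CloseableHttpClient client = HttpClients.custom()
                .setSSLContext(sslContext)
                .build()) {
            client.execute(new HttpGet("https://api.example.com/ping")).close();
        }
    }
}
```

The same idea is also available without code changes via the -Djavax.net.ssl.trustStore and -Djavax.net.ssl.trustStorePassword system properties.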

org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36588, dest: /127.0.0.1:9866, bytes: 3518, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, duration(ns): 6946259 2019-06-19 09:54:02,503 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741839_1015, type=LAST_IN_PIPELINE terminating 2019-06-19 09:54:02,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016 src: /127.0.0.1:36590 dest: /127.0.0.1:9866 2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36590, dest: /127.0.0.1:9866, bytes: 682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, duration(ns): 6602106 2019-06-19 09:54:02,571 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-200946205-127.0.1.1-1560840480894:blk_1073741840_1016, type=LAST_IN_PIPELINE terminating 2019-06-19 09:54:02,635 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017 src: /127.0.0.1:36592 dest: /127.0.0.1:9866 2019-06-19 09:54:02,650 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:36592, dest: /127.0.0.1:9866, bytes: 758, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_853656761_1, offset: 0, srvID: 6a2049c6-1a18-437a-97bd-51c5bb65a639, blockid: BP-200946205-127.0.1.1-1560840480894:blk_1073741841_1017, duration(ns): 9690339 2019-06-19 09:54:02,654 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
ActiveMQ fails to start on CentOS
As the title says — the startup output is below. What is going on here?
```
INFO: Loading '/usr/local/activemq/apache-activemq-5.15.10//bin/env'
INFO: Using java '/usr/java/jdk1.8.0_221/bin/java'
Java Runtime: Oracle Corporation 1.8.0_221 /usr/java/jdk1.8.0_221/jre
  Heap sizes: current=63360k free=62653k max=1013632k
  JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/usr/local/activemq/apache-activemq-5.15.10//conf/login.config -Dactivemq.classpath=/usr/local/activemq/apache-activemq-5.15.10//conf:/usr/local/activemq/apache-activemq-5.15.10//../lib/: -Dactivemq.home=/usr/local/activemq/apache-activemq-5.15.10/ -Dactivemq.base=/usr/local/activemq/apache-activemq-5.15.10/ -Dactivemq.conf=/usr/local/activemq/apache-activemq-5.15.10//conf -Dactivemq.data=/usr/local/activemq/apache-activemq-5.15.10//data
Extensions classpath: [/usr/local/activemq/apache-activemq-5.15.10/lib,/usr/local/activemq/apache-activemq-5.15.10/lib/camel,/usr/local/activemq/apache-activemq-5.15.10/lib/optional,/usr/local/activemq/apache-activemq-5.15.10/lib/web,/usr/local/activemq/apache-activemq-5.15.10/lib/extra]
ACTIVEMQ_HOME: /usr/local/activemq/apache-activemq-5.15.10
ACTIVEMQ_BASE: /usr/local/activemq/apache-activemq-5.15.10
ACTIVEMQ_CONF: /usr/local/activemq/apache-activemq-5.15.10/conf
ACTIVEMQ_DATA: /usr/local/activemq/apache-activemq-5.15.10/data
Usage: ./activemq [--extdir <dir>] [task] [task-options] [task data]

Tasks:
    browse   - Display selected messages in a specified destination.
    bstat    - Performs a predefined query that displays useful statistics regarding the specified broker
    consumer - Receives messages from the broker
    create   - Creates a runnable broker instance in the specified path.
    decrypt  - Decrypts given text
    dstat    - Performs a predefined query that displays useful tabular statistics regarding the specified destination type
    encrypt  - Encrypts given text
    export   - Exports a stopped brokers data files to an archive file
    list     - Lists all available brokers in the specified JMX context
    producer - Sends messages to the broker
    purge    - Delete selected destination's messages that matches the message selector
    query    - Display selected broker component's attributes and statistics.
    start    - Creates and starts a broker using a configuration file, or a broker URI.
    stop     - Stops a running broker specified by the broker name.

Task Options (Options specific to each task):
    --extdir <dir> - Add the jar files in the directory to the classpath.
    --version      - Display the version information.
    -h,-?,--help   - Display this help information. To display task specific help, use Main [task] -h,-?,--help

Task Data:
    - Information needed by each specific task.

JMX system property options:
    -Dactivemq.jmx.url=<jmx service uri> (default is: 'service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi')
    -Dactivemq.jmx.user=<user name>
    -Dactivemq.jmx.password=<password>

Tasks provided by the sysv init script:
    kill    - terminate instance in a drastic way by sending SIGKILL
    restart - stop running instance (if there is one), start new instance
    console - start broker in foreground, useful for debugging purposes
    status  - check if activemq process is running

Configuration of this script:
    The configuration of this script is read from the following files: /etc/default/activemq /root/.activemqrc /usr/local/activemq/apache-activemq-5.15.10//bin/env
    This script searches for the files in the listed order and reads the first available file.
    Modify /usr/local/activemq/apache-activemq-5.15.10//bin/env or create a copy of that file on a suitable location.
To use additional configurations for running multiple instances on the same operating system rename or symlink script to a name matching to activemq-instance-<INSTANCENAME>. This changes the configuration location to /etc/default/activemq-instance-<INSTANCENAME> and $HOME/.activemqrc-instance-<INSTANCENAME>.
```
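Everything printed here is ActiveMQ's own usage/help text, which the launcher falls back to when it is invoked without a task argument, so the broker was most likely never actually asked to start. A minimal sketch, assuming the install path shown in the output:

```
# Hypothetical paths based on the ones in the question; adjust to your install.
cd /usr/local/activemq/apache-activemq-5.15.10/bin

# Start the broker in the foreground first so any real startup error lands on the terminal.
./activemq console

# If that works, start it as a daemon and check it.
./activemq start
./activemq status

# The broker log is usually written under data/activemq.log in the install directory.
tail -f ../data/activemq.log
```

Running `console` first is just a way to see errors directly instead of only in the log file.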
On Linux, the WebLogic install, domain-creation, and startup commands all end with "已杀死" (Killed) — how do I fix this?
1. When starting the WebLogic server it always ends with "已杀死" (Killed). Re-creating the domain or re-running the installer is killed the same way.
2. Some other Linux commands also get killed.
3. Linux version: Linux oracleserver 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
4. WebLogic version: 10.3.6.0
5. Startup output:
.
.
JAVA Memory arguments: -Xms256m -Xmx512m -XX:MaxPermSize=256m
.
WLS Start Mode=Production
.
CLASSPATH=/root/Oracle/Middleware/patch_wls1036/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/root/Oracle/Middleware/patch_ocp371/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_45/lib/tools.jar:/root/Oracle/Middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/root/Oracle/Middleware/wlserver_10.3/server/lib/weblogic.jar:/root/Oracle/Middleware/modules/features/weblogic.server.modules_10.3.6.0.jar:/root/Oracle/Middleware/wlserver_10.3/server/lib/webservices.jar:/root/Oracle/Middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/root/Oracle/Middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:/root/Oracle/Middleware/wlserver_10.3/common/derby/lib/derbyclient.jar:/root/Oracle/Middleware/wlserver_10.3/server/lib/xqrl.jar:.:/usr/java/jdk1.6.0_45/lib/dt.jar:/usr/java/jdk1.6.0_45/lib/tools.jar
.
PATH=/root/Oracle/Middleware/wlserver_10.3/server/bin:/root/Oracle/Middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_45/jre/bin:/usr/java/jdk1.6.0_45/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.6.0_45/bin:/root/bin
.
***************************************************
* To start WebLogic Server, use a username and *
* password assigned to an admin-level user. For *
* server administration, use the WebLogic Server *
* console at http://hostname:port/console *
***************************************************
starting weblogic with Java version:
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)
Starting WLS with line:
/usr/java/jdk1.6.0_45/bin/java -server -Xms256m -Xmx512m -XX:MaxPermSize=256m -Dweblogic.Name=dzpzk_sj_server -Djava.security.policy=/root/Oracle/Middleware/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -da -Dplatform.home=/root/Oracle/Middleware/wlserver_10.3 -Dwls.home=/root/Oracle/Middleware/wlserver_10.3/server -Dweblogic.home=/root/Oracle/Middleware/wlserver_10.3/server -Dweblogic.management.discover=true -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=false -Dweblogic.ext.dirs=/root/Oracle/Middleware/patch_wls1036/profiles/default/sysext_manifest_classpath:/root/Oracle/Middleware/patch_ocp371/profiles/default/sysext_manifest_classpath weblogic.Server
<2018-5-21 上午09时48分21秒 CST> <Info> <Security> <BEA-090905> <Disabling CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true>
<2018-5-21 上午09时48分22秒 CST> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true>
<2018-5-21 上午09时48分22秒 CST> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 20.45-b01 from Sun Microsystems Inc.>
/root/Oracle/Middleware/user_projects/domains/dzpzk_domain_sj/bin/startWebLogic.sh: line 180: 1948 已杀死 ${JAVA_HOME}/bin/java ${JAVA_VM} ${MEM_ARGS} -Dweblogic.Name=${SERVER_NAME} -Djava.security.policy=${WL_HOME}/server/lib/weblogic.policy ${JAVA_OPTIONS} ${PROXY_SETTINGS} ${SERVER_CLASS}
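A bare "已杀死" (Killed) with no Java stack trace means the JVM was terminated from outside, and on Linux the usual suspect is the kernel OOM killer, especially when unrelated commands get killed too, as point 2 describes. A minimal diagnostic sketch (log locations assume a RHEL/CentOS-style system like the one above):

```
# Check whether the kernel OOM killer terminated the java process.
dmesg | grep -iE 'killed process|out of memory' | tail -n 20
grep -iE 'killed process|out of memory' /var/log/messages | tail -n 20

# How much memory and swap does the machine actually have free?
free -m

# Per-user limits can also cause processes to be killed.
ulimit -a
```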
CentOS 7 silent install of Oracle 10g: starting the listener fails — how do I fix this? Please help!
Running $ORACLE_HOME/bin/netca /silent /responseFile /home/oracle/database/response/netca.rsp to silently configure the listener fails with:
Exception in thread "main" java.lang.UnsatisfiedLinkError: /usr/oracle/product/10.2.0/db_1/jdk/jre/lib/i386/libawt.so: /lib/tls/i686/libc.so.6: version `GLIBC_2.4' not found (required by /lib/libXp.so.6)
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1586)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1503)
    at java.lang.Runtime.loadLibrary0(Runtime.java:788)
    at java.lang.System.loadLibrary(System.java:834)
    at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.awt.NativeLibLoader.loadLibraries(NativeLibLoader.java:38)
    at sun.awt.DebugHelper.<clinit>(DebugHelper.java:29)
    at java.awt.Component.<clinit>(Component.java:506)
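The UnsatisfiedLinkError comes from the 32-bit libawt.so bundled with the 10g JDK: it pulls in /lib/libXp.so.6, which needs a newer 32-bit glibc than the /lib/tls/i686/libc.so.6 the loader resolved. A small diagnostic sketch (paths are the ones from the error; the i686 package names are the usual CentOS ones and are an assumption here):

```
# Which 32-bit libraries does libawt.so actually resolve, and which are missing/mismatched?
ldd /usr/oracle/product/10.2.0/db_1/jdk/jre/lib/i386/libawt.so

# Are 32-bit glibc and libXp installed at all?
rpm -q glibc.i686 libXp.i686

# If they are simply missing, installing the i686 packages is the usual next step.
yum install glibc.i686 libXp.i686
```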
Installing WebLogic on CentOS: startup error
I installed WebLogic 10.3.5 on CentOS in a virtual machine. The installation succeeded and I created a new domain, but startup reports "starting weblogic with Java version: Unrecognized option: -jrockit / Error: Could not create the Java Virtual Machine." I am using JDK 1.7. The full output is:
JAVA Memory arguments: -Xms512m -Xmx512m
.
WLS Start Mode=Development
.
CLASSPATH=/home/weblogic/Oracle/Middleware/patch_wls1035/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/home/weblogic/Oracle/Middleware/patch_ocp360/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.7.0_79/lib/tools.jar:/home/weblogic/Oracle/Middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/home/weblogic/Oracle/Middleware/wlserver_10.3/server/lib/weblogic.jar:/home/weblogic/Oracle/Middleware/modules/features/weblogic.server.modules_10.3.5.0.jar:/home/weblogic/Oracle/Middleware/wlserver_10.3/server/lib/webservices.jar:/home/weblogic/Oracle/Middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/home/weblogic/Oracle/Middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:/home/weblogic/Oracle/Middleware/wlserver_10.3/common/derby/lib/derbyclient.jar:/home/weblogic/Oracle/Middleware/wlserver_10.3/server/lib/xqrl.jar
.
PATH=/home/weblogic/Oracle/Middleware/wlserver_10.3/server/bin:/home/weblogic/Oracle/Middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.7.0_79/jre/bin:/usr/java/jdk1.7.0_79/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin
.
***************************************************
* To start WebLogic Server, use a username and *
* password assigned to an admin-level user. For *
* server administration, use the WebLogic Server *
* console at http://hostname:port/console *
***************************************************
starting weblogic with Java version:
Unrecognized option: -jrockit
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Starting WLS with line:
/usr/java/jdk1.7.0_79/bin/java -jrockit -Xms512m -Xmx512m -Dweblogic.Name=AdminServer -Djava.security.policy=/home/weblogic/Oracle/Middleware/wlserver_10.3/server/lib/weblogic.policy -Xverify:none -da -Dplatform.home=/home/weblogic/Oracle/Middleware/wlserver_10.3 -Dwls.home=/home/weblogic/Oracle/Middleware/wlserver_10.3/server -Dweblogic.home=/home/weblogic/Oracle/Middleware/wlserver_10.3/server -Dweblogic.management.discover=true -Dwlw.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=/home/weblogic/Oracle/Middleware/patch_wls1035/profiles/default/sysext_manifest_classpath:/home/weblogic/Oracle/Middleware/patch_ocp360/profiles/default/sysext_manifest_classpath weblogic.Server
Unrecognized option: -jrockit
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
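The generated command line shows the script passing -jrockit to a HotSpot JDK (1.7.0_79), and HotSpot does not know that option. In WebLogic 10.3.x domain scripts the vendor-specific JVM flags are chosen from the JAVA_VENDOR setting, so one thing to check — a sketch under that assumption, with the domain path as a placeholder:

```
# Which vendor do the domain scripts think the JVM is?
cd /home/weblogic/Oracle/Middleware/user_projects/domains/<your_domain>/bin
grep -n "JAVA_VENDOR" setDomainEnv.sh | head

# If JAVA_VENDOR is a JRockit/BEA value, pointing it at the HotSpot branch, e.g.
#   JAVA_VENDOR="Sun"
# usually removes the -jrockit flag from the generated command line.

# Start again and check the "Starting WLS with line:" output it prints.
./startWebLogic.sh | head -n 40
```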
WebLogic starts successfully, but I can never reach the console — any ideas?
[weblogic@hiaward bin]$ ./startWebLogic.sh . . JAVA Memory arguments: -Xms512m -Xmx1024m -XX:MaxPermSize=256m . WLS Start Mode=Production . CLASSPATH=/opt/weblogic/patch_wls1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/opt/JDK/jdk1.6.0_30/lib/tools.jar:/opt/weblogic/utils/config/10.3.1.0/config-launch.jar:/opt/weblogic/wlserver_10.3/server/lib/weblogic_sp.jar:/opt/weblogic/wlserver_10.3/server/lib/weblogic.jar:/opt/weblogic/modules/features/weblogic.server.modules_10.3.1.0.jar:/opt/weblogic/wlserver_10.3/server/lib/webservices.jar:/opt/weblogic/modules/org.apache.ant_1.7.0/lib/ant-all.jar:/opt/weblogic/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:/opt/weblogic/wlserver_10.3/common/eval/pointbase/lib/pbclient57.jar:/opt/weblogic/wlserver_10.3/server/lib/xqrl.jar:.:/opt/JDK/jdk1.6.0_30/lib/dt.jar:/opt/JDK/jdk1.6.0_30/lib/tools.jar . PATH=/opt/weblogic/wlserver_10.3/server/bin:/opt/weblogic/modules/org.apache.ant_1.7.0/bin:/opt/JDK/jdk1.6.0_30/jre/bin:/opt/JDK/jdk1.6.0_30/bin:/opt/JDK/jdk1.6.0_30/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin . *************************************************** * To start WebLogic Server, use a username and * * password assigned to an admin-level user. For * * server administration, use the WebLogic Server * * console at http://hostname:port/console * *************************************************** starting weblogic with Java version: java version "1.6.0_30" Java(TM) SE Runtime Environment (build 1.6.0_30-b12) Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode) Starting WLS with line: /opt/JDK/jdk1.6.0_30/bin/java -server -Xms512m -Xmx1024m -XX:MaxPermSize=256m -Dweblogic.Name=AdminServer -Djava.security.policy=/opt/weblogic/wlserver_10.3/server/lib/weblogic.policy -da -Dplatform.home=/opt/weblogic/wlserver_10.3 -Dwls.home=/opt/weblogic/wlserver_10.3/server -Dweblogic.home=/opt/weblogic/wlserver_10.3/server -Dweblogic.management.discover=true -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=false -Dweblogic.ext.dirs=/opt/weblogic/patch_wls1031/profiles/default/sysext_manifest_classpath-Dfile.encoding=GBK weblogic.Server <Jan 16, 2014 1:21:36 AM CST> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath: /opt/weblogic/wlserver_10.3/L10N/beehive_ja.jar:/opt/weblogic/wlserver_10.3/L10N/beehive_ko.jar:/opt/weblogic/wlserver_10.3/L10N/beehive_zh_CN.jar:/opt/weblogic/wlserver_10.3/L10N/beehive_zh_TW.jar:/opt/weblogic/wlserver_10.3/L10N/p13n_wls_ja.jar:/opt/weblogic/wlserver_10.3/L10N/p13n_wls_ko.jar:/opt/weblogic/wlserver_10.3/L10N/p13n_wls_zh_CN.jar:/opt/weblogic/wlserver_10.3/L10N/p13n_wls_zh_TW.jar:/opt/weblogic/wlserver_10.3/L10N/testclient_ja.jar:/opt/weblogic/wlserver_10.3/L10N/testclient_ko.jar:/opt/weblogic/wlserver_10.3/L10N/testclient_zh_CN.jar:/opt/weblogic/wlserver_10.3/L10N/testclient_zh_TW.jar:/opt/weblogic/wlserver_10.3/L10N/tuxedocontrol_ja.jar:/opt/weblogic/wlserver_10.3/L10N/tuxedocontrol_ko.jar:/opt/weblogic/wlserver_10.3/L10N/tuxedocontrol_zh_CN.jar:/opt/weblogic/wlserver_10.3/L10N/tuxedocontrol_zh_TW.jar:/opt/weblogic/wlserver_10.3/L10N/workshop_ja.jar:/opt/weblogic/wlserver_10.3/L10N/workshop_ko.jar:/opt/weblogic/wlserver_10.3/L10N/workshop_zh_CN.jar:/opt/weblogic/wlserver_10.3/L10N/workshop_zh_TW.jar> <Jan 16, 2014 1:21:36 AM CST> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 20.5-b03 from Sun Microsystems 
Inc.> <Jan 16, 2014 1:21:36 AM CST> <Info> <Management> <BEA-141107> <Version: WebLogic Server 10.3.1.0 Wed Jun 10 22:24:41 MDT 2009 1227385 > <Jan 16, 2014 1:21:38 AM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING> <Jan 16, 2014 1:21:38 AM CST> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool> <Jan 16, 2014 1:21:38 AM CST> <Notice> <Log Management> <BEA-170019> <The server log file /opt/weblogic/user_projects/domains/xTrace/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.> <Jan 16, 2014 1:21:41 AM CST> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.> <Jan 16, 2014 1:21:44 AM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY> <Jan 16, 2014 1:21:44 AM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING> Jan 16, 2014 1:21:46 AM com.sun.faces.config.ConfigureListener contextInitialized INFO: Initializing Sun's JavaServer Faces implementation (1.2_03-b04-FCS) for context '/console' Jan 16, 2014 1:21:46 AM com.sun.faces.config.ConfigureListener contextInitialized INFO: Completed initializing Sun's JavaServer Faces implementation (1.2_03-b04-FCS) for context '/console' <Jan 16, 2014 1:21:47 AM CST> <Notice> <Log Management> <BEA-170027> <The Server has established connection with the Domain level Diagnostic Service successfully.> <Jan 16, 2014 1:21:47 AM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN> <Jan 16, 2014 1:21:47 AM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING> <Jan 16, 2014 1:21:47 AM CST> <Notice> <Server> <BEA-002613> <Channel "Default[1]" is now listening on fe80:0:0:0:223:aeff:fea2:a7a6:7001 for protocols iiop, t3, ldap, snmp, http.> <Jan 16, 2014 1:21:47 AM CST> <Notice> <Server> <BEA-002613> <Channel "Default[3]" is now listening on 127.0.0.1:7001 for protocols iiop, t3, ldap, snmp, http.> <Jan 16, 2014 1:21:47 AM CST> <Notice> <Server> <BEA-002613> <Channel "Default[2]" is now listening on 0:0:0:0:0:0:0:1:7001 for protocols iiop, t3, ldap, snmp, http.> <Jan 16, 2014 1:21:47 AM CST> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 192.168.13.246:7001 for protocols iiop, t3, ldap, snmp, http.> <Jan 16, 2014 1:21:47 AM CST> <Notice> <WebLogicServer> <BEA-000329> <Started WebLogic Admin Server "AdminServer" for domain "xTrace" running in Production Mode> <Jan 16, 2014 1:21:47 AM CST> <Warning> <Server> <BEA-002611> <Hostname "localhost6.localdomain6", maps to multiple IP addresses: 202.102.110.203, 0:0:0:0:0:0:0:1> <Jan 16, 2014 1:21:47 AM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING> <Jan 16, 2014 1:21:47 AM CST> <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>
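The log shows the admin server reach RUNNING and listen on 192.168.13.246:7001 as well as the loopback and IPv6 channels, and it also warns that the hostname resolves to multiple IP addresses. A small reachability sketch, using the addresses taken from the log (firewall commands vary by distribution):

```
# On the WebLogic host: confirm something is listening on 7001.
netstat -tlnp | grep 7001

# From the host itself, the console should answer locally.
curl -I http://127.0.0.1:7001/console

# From another machine, use the non-loopback address from the log.
curl -I http://192.168.13.246:7001/console

# If the local request works but the remote one does not, look at the firewall rules.
iptables -L -n | head
```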
WebLogic 12.1.1 shuts itself down right after starting — please help.
[weblogic@songpy base_domain]$ ./startWebLogic.sh . . JAVA Memory arguments: -Xms256m -Xmx512m -XX:MaxPermSize=256m . WLS Start Mode=Production . CLASSPATH=/u02/weblogic/patch_wls1211/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u02/weblogic/patch_oepe101/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u02/weblogic/patch_ocp371/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u02/weblogic/jdk160_29/lib/tools.jar:/u02/weblogic/wlserver_12.1/server/lib/weblogic_sp.jar:/u02/weblogic/wlserver_12.1/server/lib/weblogic.jar:/u02/weblogic/modules/features/weblogic.server.modules_12.1.1.0.jar:/u02/weblogic/wlserver_12.1/server/lib/webservices.jar:/u02/weblogic/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/u02/weblogic/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:/u02/weblogic/wlserver_12.1/common/derby/lib/derbyclient.jar:/u02/weblogic/wlserver_12.1/server/lib/xqrl.jar:/usr/java/jdk1.6.0_22/lib . PATH=/u02/weblogic/wlserver_12.1/server/bin:/u02/weblogic/modules/org.apache.ant_1.7.1/bin:/u02/weblogic/jdk160_29/jre/bin:/u02/weblogic/jdk160_29/bin:/usr/java/jdk1.6.0_22/bin:/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin . *************************************************** * To start WebLogic Server, use a username and * * password assigned to an admin-level user. For * * server administration, use the WebLogic Server * * console at http://hostname:port/console * *************************************************** starting weblogic with Java version: java version "1.6.0_29" Java(TM) SE Runtime Environment (build 1.6.0_29-b11) Java HotSpot(TM) Server VM (build 20.4-b02, mixed mode) Starting WLS with line: /u02/weblogic/jdk160_29/bin/java -server -Xms256m -Xmx512m -XX:MaxPermSize=256m -Dweblogic.Name=AdminServer -Djava.security.policy=/u02/weblogic/wlserver_12.1/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -Djava.endorsed.dirs=/u02/weblogic/jdk160_29/jre/lib/endorsed:/u02/weblogic/wlserver_12.1/endorsed -da -Dplatform.home=/u02/weblogic/wlserver_12.1 -Dwls.home=/u02/weblogic/wlserver_12.1/server -Dweblogic.home=/u02/weblogic/wlserver_12.1/server -Dweblogic.management.discover=true -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=false -Dweblogic.ext.dirs=/u02/weblogic/patch_wls1211/profiles/default/sysext_manifest_classpath:/u02/weblogic/patch_oepe101/profiles/default/sysext_manifest_classpath:/u02/weblogic/patch_ocp371/profiles/default/sysext_manifest_classpath weblogic.Server <Feb 20, 2014 4:02:44 PM CST> <Info> <Security> <BEA-090905> <Disabling CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true> <Feb 20, 2014 4:02:44 PM CST> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG to FIPS186PRNG. 
To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true> <Feb 20, 2014 4:02:45 PM CST> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) Server VM Version 20.4-b02 from Sun Microsystems Inc..> <Feb 20, 2014 4:02:48 PM CST> <Info> <Management> <BEA-141107> <Version: WebLogic Server Temporary Patch for 13340309 Thu Feb 16 18:30:21 IST 2012 WebLogic Server Temporary Patch for 13019800 Mon Jan 16 16:53:54 IST 2012 WebLogic Server Temporary Patch for BUG13391585 Thu Feb 02 10:18:36 IST 2012 WebLogic Server Temporary Patch for 13516712 Mon Jan 30 15:09:33 IST 2012 WebLogic Server Temporary Patch for BUG13641115 Tue Jan 31 11:19:13 IST 2012 WebLogic Server Temporary Patch for BUG13603813 Wed Feb 15 19:34:13 IST 2012 WebLogic Server Temporary Patch for 13424251 Mon Jan 30 14:32:34 IST 2012 WebLogic Server Temporary Patch for 13361720 Mon Jan 30 15:24:05 IST 2012 WebLogic Server Temporary Patch for BUG13421471 Wed Feb 01 11:24:18 IST 2012 WebLogic Server Temporary Patch for BUG13657792 Thu Feb 23 12:57:33 IST 2012 WebLogic Server 12.1.1.0 Wed Dec 7 08:40:57 PST 2011 1445491 > <Feb 20, 2014 4:02:52 PM CST> <Info> <Security> <BEA-090065> <Getting boot identity from user.> Enter username to boot WebLogic server:weblogic Enter password to boot WebLogic server: <Feb 20, 2014 4:03:04 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.> <Feb 20, 2014 4:03:04 PM CST> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.> <Feb 20, 2014 4:03:04 PM CST> <Notice> <Log Management> <BEA-170019> <The server log file /u02/weblogic/user_projects/domains/base_domain/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.> <Feb 20, 2014 4:03:11 PM CST> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.> <Feb 20, 2014 4:03:19 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY.> <Feb 20, 2014 4:03:19 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 192.168.175.149:7001 for protocols iiop, t3, ldap, snmp, http.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <Server> <BEA-002613> <Channel "Default[1]" is now listening on fe80:0:0:0:a00:27ff:fe88:81c5:7001 for protocols iiop, t3, ldap, snmp, http.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <Server> <BEA-002613> <Channel "Default[2]" is now listening on 127.0.0.1:7001 for protocols iiop, t3, ldap, snmp, http.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <Server> <BEA-002613> <Channel "Default[3]" is now listening on 0:0:0:0:0:0:0:1:7001 for protocols iiop, t3, ldap, snmp, http.> <Feb 20, 2014 4:03:32 PM CST> <Warning> <Server> <BEA-002611> <The hostname "localhost", maps to multiple IP addresses: 127.0.0.1, 0:0:0:0:0:0:0:1.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <WebLogicServer> <BEA-000329> <Started the WebLogic Server Administration Server "AdminServer" for domain "base_domain" running in production mode.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <WebLogicServer> 
<BEA-000365> <Server state changed to RUNNING.> <Feb 20, 2014 4:03:32 PM CST> <Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.> Feb 20, 2014 4:03:36 PM com.sun.xml.ws.model.JavaMethodImpl freeze WARNING: Input Action on WSDL operation PreparedOperation and @Action on its associated Web Method preparedOperation did not match and will cause problems in dispatching the requests Feb 20, 2014 4:03:36 PM com.sun.xml.ws.model.JavaMethodImpl freeze WARNING: Input Action on WSDL operation AbortedOperation and @Action on its associated Web Method abortedOperation did not match and will cause problems in dispatching the requests Feb 20, 2014 4:03:36 PM com.sun.xml.ws.model.JavaMethodImpl freeze WARNING: Input Action on WSDL operation ReadOnlyOperation and @Action on its associated Web Method readOnlyOperation did not match and will cause problems in dispatching the requests Feb 20, 2014 4:03:36 PM com.sun.xml.ws.model.JavaMethodImpl freeze WARNING: Input Action on WSDL operation CommittedOperation and @Action on its associated Web Method committedOperation did not match and will cause problems in dispatching the requests Feb 20, 2014 4:03:36 PM com.sun.xml.ws.model.JavaMethodImpl freeze WARNING: Input Action on WSDL operation ReplayOperation and @Action on its associated Web Method replayOperation did not match and will cause problems in dispatching the requests Feb 20, 2014 4:03:37 PM com.sun.xml.ws.model.JavaMethodImpl freeze WARNING: Input Action on WSDL operation RegisterOperation and @Action on its associated Web Method registerOperation did not match and will cause problems in dispatching the requests ^C<Feb 20, 2014 4:05:26 PM CST> <Notice> <WebLogicServer> <BEA-000388> <JVM called the WebLogic Server shutdown hook. The server will force shutdown now.> <Feb 20, 2014 4:05:26 PM CST> <Alert> <WebLogicServer> <BEA-000396> <Server shutdown has been requested by <WLS Kernel>.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SUSPENDING.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <Server> <BEA-002607> <Channel "Default", listening on 192.168.175.149:7001, was shut down.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <Server> <BEA-002607> <Channel "Default[1]", listening on fe80:0:0:0:a00:27ff:fe88:81c5:7001, was shut down.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <Server> <BEA-002607> <Channel "Default[2]", listening on 127.0.0.1:7001, was shut down.> <Feb 20, 2014 4:05:26 PM CST> <Notice> <Server> <BEA-002607> <Channel "Default[3]", listening on 0:0:0:0:0:0:0:1:7001, was shut down.>
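The tail of the log shows a literal ^C immediately before "JVM called the WebLogic Server shutdown hook", i.e. the foreground JVM received an interrupt (Ctrl-C, or the terminal session ending) rather than failing on its own. A sketch of starting the server detached from the terminal, using the domain path that appears in the log:

```
cd /u02/weblogic/user_projects/domains/base_domain

# Run the start script in the background, immune to hangups, keeping its output.
nohup ./startWebLogic.sh > startWebLogic.out 2>&1 &

# Watch the startup output; the server should stay in RUNNING after the session is closed.
tail -f startWebLogic.out
```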
A question about Tomcat + SSL
I cannot get Tomcat + SSL to work; could it be because Tomcat came from the zip distribution?
My setup:
hosts: 127.0.0.1 localhost
JAVA_HOME: C:\JAVA-SET\Java\jdk1.6.0_16
CATALINA_HOME: D:\MyEclipseSet\apache-tomcat (version: apache-tomcat-6.0.20.zip)
Steps:
c:\>keytool -genkey -v -alias tomcat -keyalg RSA -keystore c:\tomcat.keystore -dname "CN=localhost,OU=cn,O=cn,L=cn,ST=cn,C=cn" -storepass changeit -keypass changeit
c:\>keytool -export -alias tomcat -keystore c:\tomcat.keystore -file server.crt
c:\>keytool -import -alias tomcat -file c:\server.crt -keystore %JAVA_HOME%/jre/lib/security/cacerts
Copy c:\tomcat.keystore to D:\MyEclipseSet\apache-tomcat.
server.xml:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="D:\MyEclipseSet\apache-tomcat\tomcat.keystore" keystorePass="changeit" truststoreFile="C:\JAVA-SET\Java\jdk1.6.0_16\jre\lib\security\cacerts"/>
Please help. Additional notes: both c:\tomcat.keystore and c:\server.crt can be found. With keytool -list -v -keystore %JAVA_HOME%/jre/lib/security/cacerts > d:\t.txt I can see the freshly imported alias in d:\t.txt:
Alias name: tomcat
Creation date: 2009-8-30
Entry type: trustedCertEntry
Owner: CN=localhost, OU=cn, O=cn, L=cn, ST=cn, C=cn....
---------------
In Firefox I get this error: the certificate is not trusted because it is self-signed, error code SEC_ERROR_CA_CERT_INVALID.
My environment: JAVA_HOME: C:\JAVA-SET\Java\jdk1.6.0_16
In IE the page says there is a problem with the website's security certificate, which may indicate an attempt to deceive you or intercept data you send to the server, and recommends closing the page and not continuing. After clicking "Continue to this website (not recommended)", the address bar shows res://ieframe.dll/ and the page cannot be displayed.
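The keytool -list output above only proves the certificate landed in the JDK's cacerts; Firefox and IE keep their own trust stores, and SEC_ERROR_CA_CERT_INVALID is the normal reaction to a self-signed certificate the browser has not been told to trust. A small sketch for inspecting what the connector actually serves, independent of any browser (assumes OpenSSL is available, e.g. in a Unix-style shell such as Git Bash; paths are the ones from the question):

```
# Inspect the certificate Tomcat actually presents on 8443.
openssl s_client -connect localhost:8443 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates

# Double-check the keystore the Connector points at really contains the "tomcat" key entry.
keytool -list -v -keystore "D:\MyEclipseSet\apache-tomcat\tomcat.keystore" -storepass changeit
```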
Tomcat HTTPS configuration problem
I configured it following the steps below (found online):
1. Generate the server key: switch to %TOMCAT_HOME% on the command line and run (the tool ships with JDK 1.4 and later):
keytool -genkey -alias tomcat -keyalg RSA -keypass changeit -storepass changeit -keystore server.keystore -validity 3600
2. Import the certificate into the JDK trust store:
(1) keytool -export -trustcacerts -alias tomcat -file server.cer -keystore server.keystore -storepass changeit
(2) keytool -import -trustcacerts -alias tomcat -file server.cer -keystore %JAVA_HOME%/jre/lib/security/cacerts -storepass changeit
3. Edit server.xml in Tomcat:
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="8443" minSpareThreads="5" maxSpareThreads="75" enableLookups="false" disableUploadTimeout="true" acceptCount="100" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" clientAuth="false" sslProtocol="TLS" keystoreFile="D:\apache-tomcat-6.0.14\server.keystore" keystorePass="changeit"/>
4. Verify: https://localhost:8443/
5. Here is the problem: https://localhost:8443/ loads, but IE shows "certificate error" in the address bar. (screenshot: /upload/attachment/139891/dd38e952-5a03-373d-a72b-24f46f524d8b.jpg)
6. Importing the certificate generated above into IE does not make the "certificate error" go away either. (screenshot: /upload/attachment/139896/353f1b45-d5e3-35aa-a590-daaf8d292d84.jpg)
If anyone has experience with this: how should it be configured so the "certificate error" disappears? Thanks. Also: with this setup, does accessing over HTTPS mean the URL is encrypted?
Follow-up: I configured it exactly like this; after starting Tomcat, https://localhost:8443/ works fine, but the "certificate error" just will not go away. I also tried importing the certificate into IE, but it did not help...
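One detail worth checking: the keytool command in step 1 has no -dname, so whatever was typed at the interactive prompts became the certificate's CN, and IE keeps flagging a certificate whose CN does not match the host in the address bar (localhost here) even after it is imported. A sketch that regenerates the key with the CN pinned to the hostname being browsed — the -dname values are assumptions, not something visible in the post:

```
# Regenerate the self-signed key so its CN matches the hostname used in the browser.
keytool -genkey -alias tomcat -keyalg RSA -keypass changeit -storepass changeit -keystore server.keystore -validity 3600 -dname "CN=localhost, OU=dev, O=dev, L=cn, ST=cn, C=cn"

# Confirm the owner/CN of what was generated before wiring it into server.xml.
keytool -list -v -keystore server.keystore -storepass changeit
```

On the last question: with HTTPS the request line and query string travel inside the TLS tunnel, so the URL path is encrypted on the wire; only the target host and port remain visible to the network.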
A question about ApplicationStartingEvent
``` "C:\Program Files\Java\jdk1.8.0_66\bin\java.exe" -XX:TieredStopAtLevel=1 -noverify -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true "-javaagent:D:\IDEA\IntelliJ IDEA 2019.1.1\lib\idea_rt.jar=49516:D:\IDEA\IntelliJ IDEA 2019.1.1\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_66\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_66\jre\lib\rt.jar;D:\Test\mcrm-seven\target\classes;D:\apache-tomcat-7.0.10\apache-tomcat-7.0.10\lib\jsp-api.jar;D:\apache-tomcat-7.0.10\apache-tomcat-7.0.10\lib\servlet-api.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\ojdbc14-10.2.0.4.0.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\pinyin4j-2.5.0.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\UserAgentUtils-1.13.jar;D:\Test\mcrm-seven\lib\javax.persistence.jar;D:\Test\mcrm-seven\lib\javax.transaction.jar;D:\Test\mcrm-seven\lib\javax.jms.jar;D:\Test\mcrm-seven\lib\javax.ejb.jar;D:\Test\mcrm-seven\lib\javax.servlet.jsp.jstl.jar;D:\Test\mcrm-seven\lib\javax.resource.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-log4j\1.3.8.RELEASE\spring-boot-starter-log4j-1.3.8.RELEASE.jar;C:\Users\17268\.m2\repository\org\slf4j\jcl-over-slf4j\1.7.7\jcl-over-slf4j-1.7.7.jar;C:\Users\17268\.m2\repository\org\slf4j\jul-to-slf4j\1.7.7\jul-to-slf4j-1.7.7.jar;C:\Users\17268\.m2\repository\org\slf4j\slf4j-log4j12\1.7.7\slf4j-log4j12-1.7.7.jar;C:\Users\17268\.m2\repository\log4j\log4j\1.2.17\log4j-1.2.17.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-logging\2.2.2.RELEASE\spring-boot-starter-logging-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\com\alibaba\druid-spring-boot-starter\1.1.9\druid-spring-boot-starter-1.1.9.jar;C:\Users\17268\.m2\repository\com\alibaba\druid\1.1.9\druid-1.1.9.jar;C:\Users\17268\.m2\repository\org\slf4j\slf4j-api\1.7.7\slf4j-api-1.7.7.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-autoconfigure\2.2.2.RELEASE\spring-boot-autoconfigure-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\net\sf\dozer\dozer\5.5.1\dozer-5.5.1.jar;C:\Users\17268\.m2\repository\org\freemarker\freemarker\2.3.20\freemarker-2.3.20.jar;C:\Users\17268\.m2\repository\junit\junit\4.11\junit-4.11.jar;C:\Users\17268\.m2\repository\
org\hamcrest\hamcrest-core\2.1\hamcrest-core-2.1.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.2.3\jackson-core-2.2.3.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\core\jackson-databind\2.2.3\jackson-databind-2.2.3.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\core\jackson-annotations\2.2.3\jackson-annotations-2.2.3.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\module\jackson-module-jaxb-annotations\2.2.3\jackson-module-jaxb-annotations-2.2.3.jar;C:\Users\17268\.m2\repository\net\sf\ehcache\ehcache-core\2.6.11\ehcache-core-2.6.11.jar;C:\Users\17268\.m2\repository\net\sf\ehcache\ehcache-web\2.0.4\ehcache-web-2.0.4.jar;C:\Users\17268\.m2\repository\redis\clients\jedis\3.1.0\jedis-3.1.0.jar;C:\Users\17268\.m2\repository\org\apache\commons\commons-pool2\2.7.0\commons-pool2-2.7.0.jar;C:\Users\17268\.m2\repository\com\thoughtworks\xstream\xstream\1.4.7\xstream-1.4.7.jar;C:\Users\17268\.m2\repository\xmlpull\xmlpull\1.1.3.1\xmlpull-1.1.3.1.jar;C:\Users\17268\.m2\repository\xpp3\xpp3_min\1.1.4c\xpp3_min-1.1.4c.jar;C:\Users\17268\.m2\repository\org\apache\commons\commons-lang3\3.3.2\commons-lang3-3.3.2.jar;C:\Users\17268\.m2\repository\commons-io\commons-io\2.4\commons-io-2.4.jar;C:\Users\17268\.m2\repository\commons-codec\commons-codec\1.9\commons-codec-1.9.jar;C:\Users\17268\.m2\repository\commons-fileupload\commons-fileupload\1.3.1\commons-fileupload-1.3.1.jar;C:\Users\17268\.m2\repository\commons-beanutils\commons-beanutils\1.9.1\commons-beanutils-1.9.1.jar;C:\Users\17268\.m2\repository\commons-collections\commons-collections\3.2.1\commons-collections-3.2.1.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\analyzer-2012_u6.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\thumbnailator-0.4.2.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\apache-ant-zip-2.3.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\ckfinder-2.3.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\ckfinderplugin-fileeditor-2.3.jar;D:\Test\mcrm-seven\src\main\webapp\WEB-INF\lib\ckfinderplugin-imageresize-2.3.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-data-redis\2.2.2.RELEASE\spring-boot-starter-data-redis-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter\2.2.2.RELEASE\spring-boot-starter-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\yaml\snakeyaml\1.25\snakeyaml-1.25.jar;C:\Users\17268\.m2\repository\org\springframework\data\spring-data-redis\2.2.3.RELEASE\spring-data-redis-2.2.3.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\data\spring-data-keyvalue\2.2.3.RELEASE\spring-data-keyvalue-2.2.3.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\data\spring-data-commons\2.2.3.RELEASE\spring-data-commons-2.2.3.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-oxm\5.2.2.RELEASE\spring-oxm-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-aop\5.2.2.RELEASE\spring-aop-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-context-support\5.2.2.RELEASE\spring-context-support-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\io\lettuce\lettuce-core\5.2.1.RELEASE\lettuce-core-5.2.1.RELEASE.jar;C:\Users\17268\.m2\repository\io\netty\netty-common\4.1.43.Final\netty-common-4.1.43.Final.jar;C:\Users\17268\.m2\repository\io\netty\netty-handler\4.1.43.Final\netty-handler-4.1.43.Final.jar;C:\Users\17268\.m2\repository\io\netty\netty-buffer\4.1.43.Final\netty-buffer-4.1.43.Final.jar;C:\Users\17268\.m2\reposit
ory\io\netty\netty-codec\4.1.43.Final\netty-codec-4.1.43.Final.jar;C:\Users\17268\.m2\repository\io\netty\netty-transport\4.1.43.Final\netty-transport-4.1.43.Final.jar;C:\Users\17268\.m2\repository\io\netty\netty-resolver\4.1.43.Final\netty-resolver-4.1.43.Final.jar;C:\Users\17268\.m2\repository\io\projectreactor\reactor-core\3.3.1.RELEASE\reactor-core-3.3.1.RELEASE.jar;C:\Users\17268\.m2\repository\org\reactivestreams\reactive-streams\1.0.3\reactive-streams-1.0.3.jar;C:\Users\17268\.m2\repository\org\apache\poi\poi\3.9\poi-3.9.jar;C:\Users\17268\.m2\repository\org\apache\poi\poi-ooxml\3.9\poi-ooxml-3.9.jar;C:\Users\17268\.m2\repository\dom4j\dom4j\1.6.1\dom4j-1.6.1.jar;C:\Users\17268\.m2\repository\xml-apis\xml-apis\1.0.b2\xml-apis-1.0.b2.jar;C:\Users\17268\.m2\repository\org\apache\poi\poi-ooxml-schemas\3.9\poi-ooxml-schemas-3.9.jar;C:\Users\17268\.m2\repository\org\apache\xmlbeans\xmlbeans\2.3.0\xmlbeans-2.3.0.jar;C:\Users\17268\.m2\repository\stax\stax-api\1.0.1\stax-api-1.0.1.jar;C:\Users\17268\.m2\repository\com\drewnoakes\metadata-extractor\2.6.2\metadata-extractor-2.6.2.jar;C:\Users\17268\.m2\repository\com\adobe\xmp\xmpcore\5.1.2\xmpcore-5.1.2.jar;C:\Users\17268\.m2\repository\xerces\xercesImpl\2.8.1\xercesImpl-2.8.1.jar;C:\Users\17268\.m2\repository\com\google\zxing\core\2.2\core-2.2.jar;C:\Users\17268\.m2\repository\com\google\zxing\javase\2.2\javase-2.2.jar;C:\Users\17268\.m2\repository\com\google\guava\guava\17.0\guava-17.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-engine\5.21.0\activiti-engine-5.21.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-bpmn-converter\5.21.0\activiti-bpmn-converter-5.21.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-bpmn-model\5.21.0\activiti-bpmn-model-5.21.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-process-validation\5.21.0\activiti-process-validation-5.21.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-image-generator\5.21.0\activiti-image-generator-5.21.0.jar;C:\Users\17268\.m2\repository\org\apache\commons\commons-email\1.4\commons-email-1.4.jar;C:\Users\17268\.m2\repository\com\sun\mail\javax.mail\1.5.2\javax.mail-1.5.2.jar;C:\Users\17268\.m2\repository\javax\activation\activation\1.1.1\activation-1.1.1.jar;C:\Users\17268\.m2\repository\org\springframework\spring-beans\5.2.2.RELEASE\spring-beans-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\joda-time\joda-time\2.10.5\joda-time-2.10.5.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-spring\5.21.0\activiti-spring-5.21.0.jar;C:\Users\17268\.m2\repository\org\springframework\spring-context\5.2.2.RELEASE\spring-context-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-expression\5.2.2.RELEASE\spring-expression-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-jdbc\5.2.2.RELEASE\spring-jdbc-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-tx\5.2.2.RELEASE\spring-tx-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-orm\5.2.2.RELEASE\spring-orm-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\commons-dbcp\commons-dbcp\1.4\commons-dbcp-1.4.jar;C:\Users\17268\.m2\repository\commons-pool\commons-pool\1.6\commons-pool-1.6.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-explorer\5.21.0\activiti-explorer-5.21.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-json-converter\5.21.0\activiti-json-converter-5.21.0.jar;C:\Users\17268\.m2\repository\math\geom2d\javaGeom\0.11.1\javaGeom-0.11.1.jar;C:\Users\17268\.m2\repository\org\act
iviti\activiti-crystalball\5.21.0\activiti-crystalball-5.21.0.jar;C:\Users\17268\.m2\repository\com\h2database\h2\1.4.200\h2-1.4.200.jar;C:\Users\17268\.m2\repository\org\springframework\spring-core\5.2.2.RELEASE\spring-core-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-jcl\5.2.2.RELEASE\spring-jcl-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\spring-web\5.2.2.RELEASE\spring-web-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\imgscalr\imgscalr-lib\4.2\imgscalr-lib-4.2.jar;C:\Users\17268\.m2\repository\org\codehaus\groovy\groovy-all\2.4.5\groovy-all-2.4.5.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-modeler\5.21.0\activiti-modeler-5.21.0.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-common-rest\5.21.0\activiti-common-rest-5.21.0.jar;C:\Users\17268\.m2\repository\org\springframework\spring-webmvc\5.2.2.RELEASE\spring-webmvc-5.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\security\spring-security-config\5.2.1.RELEASE\spring-security-config-5.2.1.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\security\spring-security-core\5.2.1.RELEASE\spring-security-core-5.2.1.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\security\spring-security-crypto\5.2.1.RELEASE\spring-security-crypto-5.2.1.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\security\spring-security-web\5.2.1.RELEASE\spring-security-web-5.2.1.RELEASE.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-parser\1.7\batik-parser-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-awt-util\1.7\batik-awt-util-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-util\1.7\batik-util-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-xml\1.7\batik-xml-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-transcoder\1.7\batik-transcoder-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\fop\0.94\fop-0.94.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\xmlgraphics-commons\1.2\xmlgraphics-commons-1.2.jar;C:\Users\17268\.m2\repository\commons-logging\commons-logging\1.0.4\commons-logging-1.0.4.jar;C:\Users\17268\.m2\repository\org\apache\avalon\framework\avalon-framework-api\4.3.1\avalon-framework-api-4.3.1.jar;C:\Users\17268\.m2\repository\org\apache\avalon\framework\avalon-framework-impl\4.3.1\avalon-framework-impl-4.3.1.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-dom\1.7\batik-dom-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-gvt\1.7\batik-gvt-1.7.jar;C:\Users\17268\.m2\repository\xml-apis\xml-apis-ext\1.3.04\xml-apis-ext-1.3.04.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-bridge\1.7\batik-bridge-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-anim\1.7\batik-anim-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-ext\1.7\batik-ext-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-script\1.7\batik-script-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-js\1.7\batik-js-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-codec\1.7\batik-codec-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-css\1.7\batik-css-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-svg-dom\1.7\batik-svg-dom-1.7.jar;C:\Users\17268\.m2\repository\org\apache\xmlgraphics\batik-svggen\1.7\batik-svggen-1.7.jar;C:\Users\17268\.m2\repository\org\activiti\activiti-diagram-rest\5.21.0\acti
viti-diagram-rest-5.21.0.jar;C:\Users\17268\.m2\repository\org\apache\shiro\shiro-core\1.2.3\shiro-core-1.2.3.jar;C:\Users\17268\.m2\repository\org\apache\shiro\shiro-spring\1.2.3\shiro-spring-1.2.3.jar;C:\Users\17268\.m2\repository\org\apache\shiro\shiro-cas\1.2.3\shiro-cas-1.2.3.jar;C:\Users\17268\.m2\repository\org\jasig\cas\client\cas-client-core\3.2.1\cas-client-core-3.2.1.jar;C:\Users\17268\.m2\repository\org\apache\shiro\shiro-web\1.2.3\shiro-web-1.2.3.jar;C:\Users\17268\.m2\repository\org\apache\shiro\shiro-ehcache\1.2.3\shiro-ehcache-1.2.3.jar;C:\Users\17268\.m2\repository\mysql\mysql-connector-java\8.0.18\mysql-connector-java-8.0.18.jar;C:\Users\17268\.m2\repository\org\mybatis\spring\boot\mybatis-spring-boot-starter\1.3.2\mybatis-spring-boot-starter-1.3.2.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-jdbc\2.2.2.RELEASE\spring-boot-starter-jdbc-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\com\zaxxer\HikariCP\3.4.1\HikariCP-3.4.1.jar;C:\Users\17268\.m2\repository\org\mybatis\spring\boot\mybatis-spring-boot-autoconfigure\1.3.2\mybatis-spring-boot-autoconfigure-1.3.2.jar;C:\Users\17268\.m2\repository\org\mybatis\mybatis-spring\1.3.2\mybatis-spring-1.3.2.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-web\2.2.2.RELEASE\spring-boot-starter-web-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-json\2.2.2.RELEASE\spring-boot-starter-json-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jdk8\2.10.1\jackson-datatype-jdk8-2.10.1.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jsr310\2.10.1\jackson-datatype-jsr310-2.10.1.jar;C:\Users\17268\.m2\repository\com\fasterxml\jackson\module\jackson-module-parameter-names\2.10.1\jackson-module-parameter-names-2.10.1.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-validation\2.2.2.RELEASE\spring-boot-starter-validation-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\jakarta\validation\jakarta.validation-api\2.0.1\jakarta.validation-api-2.0.1.jar;C:\Users\17268\.m2\repository\org\hibernate\validator\hibernate-validator\6.0.18.Final\hibernate-validator-6.0.18.Final.jar;C:\Users\17268\.m2\repository\org\jboss\logging\jboss-logging\3.4.1.Final\jboss-logging-3.4.1.Final.jar;C:\Users\17268\.m2\repository\com\fasterxml\classmate\1.5.1\classmate-1.5.1.jar;C:\Users\17268\.m2\repository\org\projectlombok\lombok\1.18.10\lombok-1.18.10.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot-starter-tomcat\2.2.2.RELEASE\spring-boot-starter-tomcat-2.2.2.RELEASE.jar;C:\Users\17268\.m2\repository\jakarta\annotation\jakarta.annotation-api\1.3.5\jakarta.annotation-api-1.3.5.jar;C:\Users\17268\.m2\repository\org\hamcrest\hamcrest\2.1\hamcrest-2.1.jar;C:\Users\17268\.m2\repository\org\mybatis\mybatis\3.5.3\mybatis-3.5.3.jar;C:\Users\17268\.m2\repository\org\springframework\boot\spring-boot\1.4.3.RELEASE\spring-boot-1.4.3.RELEASE.jar;C:\Users\17268\.m2\repository\org\apache\tomcat\embed\tomcat-embed-core\9.0.29\tomcat-embed-core-9.0.29.jar" com.thinkgem.jeesite.McrmSevenApplication Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/boot/context/event/ApplicationStartingEvent at org.springframework.boot.autoconfigure.BackgroundPreinitializer.onApplicationEvent(BackgroundPreinitializer.java:68) at org.springframework.boot.autoconfigure.BackgroundPreinitializer.onApplicationEvent(BackgroundPreinitializer.java:50) 
	at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:127)
	at org.springframework.boot.context.event.EventPublishingRunListener.started(EventPublishingRunListener.java:63)
	at org.springframework.boot.SpringApplicationRunListeners.started(SpringApplicationRunListeners.java:48)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:304)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
	at com.thinkgem.jeesite.McrmSevenApplication.main(McrmSevenApplication.java:13)
Caused by: java.lang.ClassNotFoundException: org.springframework.boot.context.event.ApplicationStartingEvent
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 12 more

Process finished with exit code 1
```
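The printed classpath mixes spring-boot-1.4.3.RELEASE.jar with 2.2.2.RELEASE versions of spring-boot-autoconfigure and the starters, and ApplicationStartingEvent does not exist in the 1.4.x line, which is exactly what the NoClassDefFoundError complains about. A quick sketch for confirming the mismatch from the project root (assuming a Maven build, since the jars resolve from the local .m2 repository):

```
# List every Spring Boot artifact the project actually resolves, with its version.
mvn dependency:tree -Dincludes=org.springframework.boot

# The run output above already shows spring-boot-1.4.3.RELEASE.jar sitting next to
# spring-boot-autoconfigure-2.2.2.RELEASE.jar, which cannot work together; the tree
# output shows which dependency drags the old core jar in.
```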
CAS SSO is configured and login authenticates against the database, but the redirect after login fails
1. Generate the keystore: run cmd, change to the Tomcat directory, and enter
   keytool -genkey -alias localhost -keyalg RSA -storepass localhost -keystore D:\sso\tomcat-cas\localhost.keystore -validity 3600
   // (the note says this generates an abc.keystore file with alias abc; the password is abc123)
2. Export the certificate to the key folder:
   keytool -export -trustcacerts -alias localhost -file D:\sso\tomcat-cas\localhost.cer -keystore D:\sso\tomcat-cas\localhost.keystore -storepass localhost
   // this generates the .cer file
3. Import the certificate into the JDK trust store:
   keytool -import -trustcacerts -alias localhost -file D:\sso\tomcat-cas\localhost.cer -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit
   When asked whether to trust the certificate, answer "y". -------- "changeit" is the default password of the JDK trust store.

I generated the certificate with these commands and also configured the connector in sso\tomcat-cas\conf\server.xml:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="D:/sso/tomcat-cas/abc.keystore" keystorePass="abc123" />
The whole sso directory was copied over from another machine, where it works fine. On this machine, after I enter the account name and password and click login, the browser stays in a loading state for ten-odd seconds and then shows "CAS is Unavailable. There was an error trying to complete your request. Please notify your support desk or try again." Could someone tell me what is going on? Any help is appreciated.
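One thing worth double-checking here (an editor's hedged note, not from the original post): the keytool commands above create localhost.keystore with store password localhost, while the server.xml Connector points at abc.keystore with password abc123. Tomcat cannot open the keystore when the file name or password does not match, and on a machine where the copied abc.keystore is missing this typically ends in exactly this kind of CAS failure after login. A self-consistent set of commands might look like the sketch below; the alias, file names and passwords are placeholders to adapt:

```
rem Hypothetical, self-consistent values -- adjust paths, alias and passwords to your environment
keytool -genkey -alias cas -keyalg RSA -validity 3600 -keystore D:\sso\tomcat-cas\cas.keystore -storepass caspass
keytool -export -alias cas -file D:\sso\tomcat-cas\cas.cer -keystore D:\sso\tomcat-cas\cas.keystore -storepass caspass
keytool -import -trustcacerts -alias cas -file D:\sso\tomcat-cas\cas.cer -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit
```

The Connector would then reference keystoreFile="D:/sso/tomcat-cas/cas.keystore" and keystorePass="caspass", and the "first and last name" (CN) entered at the keytool prompt should match the host name used in the CAS URLs, since hostname verification fails otherwise.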
Urgent! ActiveMQ cluster fails to start: No IOExceptionHandler registered, ignoring IO exception
The ActiveMQ version is the latest 5.15, clustered together with ZooKeeper. It worked fine the first time after setup, but after shutting the machines down and restarting them the next day it fails to start, and the web console cannot be opened either. Can anyone help me find where the problem is?
```
2019-10-30 19:28:09,199 | INFO | No IOExceptionHandler registered, ignoring IO exception | org.apache.activemq.broker.BrokerService | LevelDB IOException handler.
java.io.IOException: com/google/common/util/concurrent/internal/InternalFutureFailureAccess
    at org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:40)[activemq-client-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.LevelDBClient.replay_init(LevelDBClient.scala:667)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.LevelDBClient.start(LevelDBClient.scala:558)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.DBManager.start(DBManager.scala:648)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.LevelDBStore.doStart(LevelDBStore.scala:312)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.replicated.MasterLevelDBStore.doStart(MasterLevelDBStore.scala:110)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)[activemq-client-5.15.10.jar:5.15.10]
    at org.apache.activemq.leveldb.replicated.ElectingLevelDBStore$$anonfun$start_master$1.apply$mcV$sp(ElectingLevelDBStore.scala:230)[activemq-leveldb-store-5.15.10.jar:5.15.10]
    at org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:330)[hawtdispatch-scala-2.11-1.22.jar:1.22]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_191]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_191]
    at java.lang.Thread.run(Thread.java:748)[:1.8.0_191]
2019-10-30 19:28:09,207 | INFO | Stopped LevelDB[/usr/local/activemq/apache-activemq-5.15.10/data/leveldb] | org.apache.activemq.leveldb.LevelDBStore | LevelDB IOException handler.
```
I looked up solutions online. One says to delete a particular jar from the lib folder, but when I checked, that jar is not there. Another says to comment out the logging configuration in the XML config file; after commenting it out and running again, the log output is as follows:
```
2019-10-30 21:09:25,810 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@598067a5: startup date [Wed Oct 30 21:09:25 CST 2019]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-10-30 21:09:26,471 | INFO | Using Persistence Adapter: Replicated LevelDB[/usr/local/activemq/apache-activemq-5.15.10/data/leveldb, 211.67.17.205:2181,211.67.19.187:2181,211.67.16.224:2181//activemq/leveldb-stores] | org.apache.activemq.broker.BrokerService | main
2019-10-30 21:09:26,572 | INFO | Starting StateChangeDispatcher | org.apache.activemq.leveldb.replicated.groups.ZKClient | ZooKeeper state change dispatcher thread
2019-10-30 21:09:26,578 | INFO | Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:host.name=admin1 | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.version=1.8.0_191 | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.vendor=Oracle Corporation | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.home=/usr/java/jdk1.8.0_191/jre | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.class.path=/usr/local/activemq/apache-activemq-5.15.10/bin/activemq.jar | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.io.tmpdir=/usr/local/activemq/apache-activemq-5.15.10/tmp | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:java.compiler=<NA> | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:os.name=Linux | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:os.arch=amd64 | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:os.version=3.13.0-24-generic | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,578 | INFO | Client environment:user.name=admin1 | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,579 | INFO | Client environment:user.home=/home/admin1 | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,579 | INFO | Client environment:user.dir=/usr/local/activemq/zookeeper-3.4.10/bin | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,579 | INFO | Initiating client connection, connectString=211.67.17.205:2181,211.67.19.187:2181,211.67.16.224:2181 sessionTimeout=2000 watcher=org.apache.activemq.leveldb.replicated.groups.ZKClient@dc7df28 | org.apache.zookeeper.ZooKeeper | main
2019-10-30 21:09:26,591 | WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/usr/local/activemq/apache-activemq-5.15.10/conf/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. | org.apache.zookeeper.ClientCnxn | main-SendThread(211.67.19.187:2181)
2019-10-30 21:09:26,592 | INFO | Opening socket connection to server 211.67.19.187/211.67.19.187:2181 | org.apache.zookeeper.ClientCnxn | main-SendThread(211.67.19.187:2181)
2019-10-30 21:09:26,593 | WARN | unprocessed event state: AuthFailed | org.apache.activemq.leveldb.replicated.groups.ZKClient | main-EventThread
2019-10-30 21:09:26,595 | INFO | Socket connection established to 211.67.19.187/211.67.19.187:2181, initiating session | org.apache.zookeeper.ClientCnxn | main-SendThread(211.67.19.187:2181)
2019-10-30 21:09:26,611 | INFO | Session establishment complete on server 211.67.19.187/211.67.19.187:2181, sessionid = 0x26e1bcd90df0005, negotiated timeout = 4000 | org.apache.zookeeper.ClientCnxn | main-SendThread(211.67.19.187:2181)
```
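An editor's hedged observation on the first stack trace (not from the original post): the wrapped java.io.IOException names the class com/google/common/util/concurrent/internal/InternalFutureFailureAccess, which ships in Google's separate failureaccess artifact that newer Guava releases depend on. If that jar is absent from the broker's classpath, one common workaround is to place it in ActiveMQ's lib directory; the version below is only an example and should be matched to the Guava jar actually present:

```
# Assumption: the failureaccess artifact is what is missing; adjust the version to your Guava release
cd /usr/local/activemq/apache-activemq-5.15.10/lib
wget https://repo1.maven.org/maven2/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar
# then restart the broker on each node
```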
Seeking help: Flink Standalone-mode cluster fails to start when deployed in high-availability mode.
##### The HDFS, ZooKeeper and Flink clusters were deployed following a tutorial.
##### HDFS and ZooKeeper work fine, and flink standalone starts normally. When setting up the HA cluster, startup reports no errors, but jps shows no processes and the log contains the following. (Start order: zkServer.sh start ==> start-dfs.sh, start-yarn.sh, bin/start-cluster.sh)
##### Cluster environment: CentOS 7.3, Hadoop-2.8.5, Java 1.8, Scala-2.12, flink-1.9.0-2.12, ZooKeeper 3.4.14
```
2019-09-05 21:38:02,658 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -------------------------------------------------------------------------------- 2019-09-05 21:38:02,660 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint (Version: 1.9.0, Rev:9c32ed9, Date:19.08.2019 @ 16:16:55 UTC) 2019-09-05 21:38:02,660 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: root 2019-09-05 21:38:02,660 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: <no hadoop dependency found> 2019-09-05 21:38:02,660 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.221-b11 2019-09-05 21:38:02,660 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 989 MiBytes 2019-09-05 21:38:02,660 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /usr/java/jdk1.8.0_221-amd64 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - No Hadoop Dependency available 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options: 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xms1024m 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xmx1024m 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog.file=/opt/software/flink/log/flink-root-standalonesession-14-master.log 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog4j.configuration=file:/opt/software/flink/conf/log4j.properties 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlogback.configurationFile=file:/opt/software/flink/conf/logback.xml 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments: 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --configDir 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - /opt/software/flink/conf 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --executionMode 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - cluster 2019-09-05 21:38:02,661 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --host 2019-09-05 21:38:02,662 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - master 2019-09-05 21:38:02,662 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --webui-port 2019-09-05 21:38:02,662 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 8081 2019-09-05 21:38:02,662 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: /opt/software/flink/lib/flink-table_2.12-1.9.0.jar:/opt/software/flink/lib/flink-table-blink_2.12-1.9.0.jar:/opt/software/flink/lib/log4j-1.2.17.jar:/opt/software/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/software/flink/lib/flink-dist_2.12-1.9.0.jar::/usr/hadoop/hadoop/etc/hadoop: 2019-09-05 21:38:02,662 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-------------------------------------------------------------------------------- 2019-09-05 21:38:02,663 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT] 2019-09-05 21:38:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: env.java.home, /usr/java/jdk1.8.0_221-amd64 2019-09-05 21:38:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, master 2019-09-05 21:38:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123 2019-09-05 21:38:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m 2019-09-05 21:38:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.size, 1024m 2019-09-05 21:38:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1 2019-09-05 21:38:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1 2019-09-05 21:38:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability, zookeeper 2019-09-05 21:38:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.storageDir, hdfs:///flink/ha/ 2019-09-05 21:38:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.quorum, master:2181,slave02:2181,slave03:2181 2019-09-05 21:38:02,698 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.path.root, /flink 2019-09-05 21:38:02,698 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.cluster-id, /cluster_one 2019-09-05 21:38:02,698 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.client.acl, open 2019-09-05 21:38:02,698 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.execution.failover-strategy, region 2019-09-05 21:38:02,698 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.port, 8081 2019-09-05 21:38:02,699 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.address, master,slave03 2019-09-05 21:38:02,699 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.bind-port, 8080-8090 2019-09-05 21:38:02,699 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.bind-address, master,slave03 2019-09-05 21:38:02,699 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: web.submit.enable, false 2019-09-05 21:38:02,842 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint. 2019-09-05 21:38:02,842 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install default filesystem. 2019-09-05 21:38:02,877 INFO org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available. 
2019-09-05 21:38:02,903 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install security context. 2019-09-05 21:38:02,914 INFO org.apache.flink.runtime.security.modules.HadoopModuleFactory - Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath. 2019-09-05 21:38:02,926 INFO org.apache.flink.runtime.security.SecurityUtils - Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath. 2019-09-05 21:38:02,927 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Initializing cluster services. 2019-09-05 21:38:03,430 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start actor system at master:0 2019-09-05 21:38:04,268 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started 2019-09-05 21:38:04,314 INFO akka.remote.Remoting - Starting remoting 2019-09-05 21:38:04,566 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink@master:36882] 2019-09-05 21:38:04,674 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka.tcp://flink@master:36882 2019-09-05 21:38:04,701 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir) at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119) at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92) at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164) at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501) at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65) Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447) at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359) at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298) at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116) ... 10 more Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies. at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58) at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443) ... 13 more . 2019-09-05 21:38:04,708 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopping Akka RPC service. 
2019-09-05 21:38:04,738 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon. 2019-09-05 21:38:04,738 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports. 2019-09-05 21:38:04,765 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down. 2019-09-05 21:38:04,815 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopped Akka RPC service. 2019-09-05 21:38:04,816 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint. org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint. at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501) at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65) Caused by: java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir) at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119) at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92) at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164) at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163) ... 2 more Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447) at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359) at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298) at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116) ... 10 more Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies. at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58) at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443) ... 
13 more
[root@master log]# vim flink-root-standalonesession-14-master.log
(the log file flink-root-standalonesession-14-master.log repeats the same output as shown above; omitted here)
```
##### The environment variables are configured as follows (/etc/profile):
```
export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:/lib
export PATH=$PATH:$JAVA_HOME/bin:.
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:.
export HADOOP_HOME=/usr/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:.
export PYTHON_HOME=/usr/local/python3
export PATH=$PATH:$PYTHON_HOME/bin:.
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:/usr/local/scala/bin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib:/usr/hadoop/hadoop/lib/native
```
##### flink-conf.yaml HA configuration:
```
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: master:2181,slave02:2181,slave03:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /cluster_one
```
##### Because the log contains the following messages, I suspect the problem is with the environment variables or the Hadoop dependency path.
```
2019-09-06 11:44:39,820 INFO org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
……
2019-09-06 11:44:41,943 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
……
2019-09-06 11:44:42,075 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint. org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
```
##### Next, I added the following HDFS settings to flink-conf.yaml:
```
fs.hdfs.hadoopconf: /usr/hadoop/hadoop/etc/hadoop
fs.hdfs.hdfsdefault: /usr/hadoop/hadoop/etc/hadoop/hdfs-default.xml
fs.hdfs.hdfssite: /usr/hadoop/hadoop/etc/hadoop/hdfs-site.xml
```
##### The Flink cluster still fails to start, and the log output is unchanged from before.
##### I am asking the community for help: if anyone has run into this, is there a solution?
##### Many thanks.
##### PS: the Flink-on-YARN deployment mode has not been tried yet.
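The poster's own guess points at the Hadoop classpath, and the repeated "Hadoop is not in the classpath/dependencies" message supports it: Flink 1.9 does not bundle Hadoop, so the hdfs:// scheme only becomes available when a Hadoop classpath (or a flink-shaded-hadoop uber jar in flink/lib) is visible to the Flink daemons. A hedged sketch of the usual remedies, assuming the hadoop command is on the PATH on every node; file names and paths are examples only:

```
# Option 1: expose the Hadoop classpath to the Flink start scripts (e.g. add to /etc/profile on every node)
export HADOOP_CLASSPATH=$(hadoop classpath)

# Option 2: place a pre-bundled shaded Hadoop jar into Flink's lib directory on every node, e.g.
#   cp flink-shaded-hadoop-2-uber-2.8.3-7.0.jar /opt/software/flink/lib/

# then restart the cluster: bin/stop-cluster.sh && bin/start-cluster.sh
```

Note that the fs.hdfs.hadoopconf-style keys added above only tell Flink where the Hadoop configuration lives; they do not put the Hadoop classes themselves on the classpath.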
The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption
Background: the project uses JDK 1.6, and the problem started after switching to Windows 10; it did not happen on Windows 7. Screenshot of the bug below: ![screenshot](https://img-ask.csdn.net/upload/201804/03/1522749587_28278.jpg) --------------------------------------- Because of project constraints JDK 1.6 is mandatory; testing shows the problem does not occur with JDK 1.7. I tried many of the fixes suggested online, such as downloading bcprov-ext-jdk15on-1.54.jar and bcprov-jdk15on-1.54.jar and editing $JAVA_HOME\jre\lib\security\java.security, but nothing helped. I suspect the part highlighted in red in the screenshot is the problem, but I don't know how to fix it. Any guidance would be much appreciated, thanks.
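An editor's hedged suggestion for narrowing this down (not from the original post): this error usually means the TLS handshake between the old JDK 6 JSSE stack and the SQL Server instance fails before the driver can even log in, and the quickest way to see the actual handshake failure is to run the application with JSSE debugging enabled. The flag below is standard JSSE; the jar name is a placeholder for the real application entry point.

```
rem Hypothetical launch command -- replace app.jar with the actual application
java -Djavax.net.debug=ssl,handshake -jar app.jar
```

The handshake dump usually shows whether the failure is a protocol or cipher-suite mismatch (JDK 6's default JSSE supports at most TLS 1.0, which would be consistent with JDK 1.7 working) or a certificate/trust problem, which needs a different fix.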
Java accessing an AD domain over SSL fails: no subject alternative names matching IP address?
**Connection error**

javax.naming.CommunicationException: simple bind failed: 192.168.226.131:636 [Root exception is javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 192.168.226.131 found]

```
public void certinit() {
    // trust store used for the LDAPS connection
    String keystore = "D:\\Java\\jdk1.8.0_211\\jre\\lib\\security\\cacerts";
    System.setProperty("javax.net.ssl.trustStore", keystore);
    Properties env = new Properties();
    String adminName = "cn=Administrator,cn=Users,dc=hct,dc=com";
    String adminPassword = "Admin123456"; // password
    String ldapURL = "ldap://192.168.226.131:636"; // ip:port
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.SECURITY_AUTHENTICATION, "simple"); // LDAP security level: "none", "simple", "strong"
    env.put(Context.SECURITY_PRINCIPAL, adminName);
    env.put(Context.SECURITY_CREDENTIALS, adminPassword);
    env.put(Context.PROVIDER_URL, ldapURL);
    env.put(Context.SECURITY_PROTOCOL, "ssl");
    try {
        // dc is a class-level field, e.g. private LdapContext dc;
        dc = new InitialLdapContext(env, null);
        System.out.println("AD domain SSL authentication succeeded");
    } catch (Exception e) {
        System.out.println("AD domain SSL authentication failed");
        e.printStackTrace();
    }
}
```
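An editor's hedged note on the exception above (not part of the original post): the CertificateException is thrown because the connection targets the raw IP 192.168.226.131, and the AD server's certificate has no Subject Alternative Name entry for that IP, so hostname verification fails. Assuming the certificate was issued for the domain controller's DNS name, the two common ways around it look like the sketch below; the host name is a placeholder, and the system property is only recognized on JDK 8u181+ where LDAPS endpoint identification is on by default.

```
// Option 1 (preferred): connect with the host name that appears in the certificate's SAN/CN,
// e.g. the domain controller's FQDN, instead of the raw IP address.
String ldapURL = "ldap://dc01.hct.com:636"; // hypothetical FQDN; must resolve to 192.168.226.131

// Option 2 (workaround, weakens security): disable LDAPS endpoint identification
// before creating the InitialLdapContext.
System.setProperty("com.sun.jndi.ldap.object.disableEndpointIdentification", "true");
```

Re-issuing the server certificate with an IP-address SAN would also work, but that has to be done on the AD side rather than in the Java client.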