Kafka fails to start with java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V

I have tried many things: downgrading ZooKeeper so it matches the version Kafka depends on
(ZooKeeper 3.4.14, Kafka 2.3);
removing the Scala environment variable, which did not help;
and there is only one JAVA_HOME on the machine.
```
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:238)
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$2(ZooKeeperClient.scala:160)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1(ZooKeeperClient.scala:160)
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1$adapted(ZooKeeperClient.scala:156)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:156)
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1660)
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1647)
at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1642)
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1712)
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
at kafka.server.KafkaServer.startup(KafkaServer.scala:262)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
```
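For context on why this particular overload is missing: the async ZooKeeper.multi(Iterable, AsyncCallback.MultiCallback, Object) overload was only added partway through the ZooKeeper 3.4 line, so a NoSuchMethodError here usually means some older zookeeper jar is winning on the broker's classpath over the 3.4.14 jar that Kafka 2.3 ships in libs/. A minimal diagnostic sketch (a hypothetical throwaway class, not part of Kafka) that can be run with the broker's classpath to see which jar actually supplies the class:

```java
// ZkClasspathCheck: throwaway diagnostic, run with the same classpath
// as the Kafka broker (e.g. the jars in Kafka's libs/ directory).
public class ZkClasspathCheck {
    public static void main(String[] args) throws Exception {
        Class<?> zk = Class.forName("org.apache.zookeeper.ZooKeeper");
        // Where the class was actually loaded from (the jar that wins on the classpath).
        System.out.println(zk.getProtectionDomain().getCodeSource().getLocation());
        // Present in newer 3.4.x jars; throws NoSuchMethodException on older ones,
        // mirroring the NoSuchMethodError Kafka hits at startup.
        zk.getMethod("multi", Iterable.class,
                Class.forName("org.apache.zookeeper.AsyncCallback$MultiCallback"),
                Object.class);
        System.out.println("async multi(...) overload is present");
    }
}
```

If the printed location is not the zookeeper-3.4.14 jar from Kafka's own libs/ directory, that stray jar (for example one pulled in via a global CLASSPATH variable) is the likely culprit.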

Other related posts
Flume fails to start with java.lang.SecurityException: sealing violation: package org.apache.flume.conf is sealed
After starting Hadoop, ZooKeeper, and Kafka, I configured conf so that Flume consumes the messages produced into Kafka. Command used to start Flume:
flume-ng agent --conf /home/hduser/apps/flume/conf --conf-file /home/hduser/apps/flume/conf/applog --name a1 -Dflume.root.logger=INFO,console
Error on the first start:
```
2019-04-27 13:20:32,643 (main) [ERROR - org.apache.flume.node.Application.main(Application.java:374)] A fatal error occurred while running. Exception follows.
java.lang.SecurityException: sealing violation: package org.apache.flume.conf is sealed
    at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:399)
    at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.flume.node.Application.main(Application.java:350)
```
I searched online for ages and am out of ideas. Does anyone know how to fix this? Is the error saying that the package org.apache.flume.conf is sealed?
The error after starting it several more times:
```
(main) [ERROR - org.apache.flume.node.Application.main(Application.java:374)] A fatal error occurred while running. Exception follows.
java.lang.SecurityException: sealing violation: can't seal package org.apache.flume.conf: already loaded
    at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:406)
    at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.flume.node.Application.main(Application.java:350)
```
Now it says that package has already been loaded, which confuses me even more. Please help!
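A sealing violation of this shape generally means the sealed package org.apache.flume.conf is reachable from more than one jar (for example a duplicated or differently versioned flume-ng-configuration jar under lib/ or plugins.d/); the "already loaded" variant on later runs is the same clash hit in a different class-loading order. A small sketch (hypothetical class name), run with the same classpath the flume-ng launcher builds, that lists every classpath location serving that package:

```java
import java.net.URL;
import java.util.Enumeration;

// DuplicatePackageCheck: throwaway diagnostic that prints every classpath
// entry containing the org.apache.flume.conf package. Two or more hits
// would explain the sealing violation.
public class DuplicatePackageCheck {
    public static void main(String[] args) throws Exception {
        Enumeration<URL> locations = ClassLoader.getSystemClassLoader()
                .getResources("org/apache/flume/conf/");
        while (locations.hasMoreElements()) {
            System.out.println(locations.nextElement());
        }
    }
}
```

Removing or aligning the duplicate jar so the package is served from a single location is the usual fix.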
Kafka and Spring integration problem: Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties
Integrating Spring Boot with Kafka fails with Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties. Details below:
```
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
    at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:507) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.util.ReflectionUtils.doWithLocalMethods(ReflectionUtils.java:367) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.buildLifecycleMetadata(InitDestroyAnnotationBeanPostProcessor.java:208) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.findLifecycleMetadata(InitDestroyAnnotationBeanPostProcessor.java:189) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(InitDestroyAnnotationBeanPostProcessor.java:128) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(CommonAnnotationBeanPostProcessor.java:297) ~[spring-context-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:1013) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:547) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    ... 15 common frames omitted
Caused by: java.lang.NoClassDefFoundError: org/springframework/kafka/listener/config/ContainerProperties
    at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_131]
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_131]
    at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_131]
    at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:489) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
    ... 22 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_131]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_131]
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) ~[na:1.8.0_131]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_131]
    ... 26 common frames omitted
```
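This looks like version drift rather than a missing dependency: org.springframework.kafka.listener.config.ContainerProperties exists in spring-kafka up to the 2.1.x line, and the class moved to org.springframework.kafka.listener in spring-kafka 2.2, while the Spring Boot autoconfiguration on this classpath (spring-core 5.0.13.RELEASE, i.e. Boot 2.0.x) still reflects against the old location. A throwaway probe (hypothetical class) to confirm which spring-kafka generation is actually being resolved:

```java
// SpringKafkaVersionProbe: prints, for both known package locations of
// ContainerProperties, whether the class resolves and from which jar.
public class SpringKafkaVersionProbe {
    public static void main(String[] args) {
        String[] candidates = {
            "org.springframework.kafka.listener.config.ContainerProperties", // spring-kafka <= 2.1.x
            "org.springframework.kafka.listener.ContainerProperties"         // spring-kafka >= 2.2
        };
        for (String name : candidates) {
            try {
                Class<?> c = Class.forName(name);
                System.out.println(name + " -> "
                        + c.getProtectionDomain().getCodeSource().getLocation());
            } catch (ClassNotFoundException e) {
                System.out.println(name + " -> not on the classpath");
            }
        }
    }
}
```

If only the new location resolves, aligning the spring-kafka version with what the Boot version's autoconfiguration expects (or upgrading Boot itself) should clear the error.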
Exception when integrating Kafka with Storm
```
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: No leader found for partition 1
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135)
    at backtype.storm.daemon.executor$fn__6579$fn__6594$fn__6623.invoke(executor.clj:565)
    at backtype.storm.util$async_loop$fn__459.invoke(util.clj:463)
    at clojure.lang.AFn.run(AFn.java:24)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: No leader found for partition 1
    at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81)
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:79)
    ... 6 more
Caused by: java.lang.RuntimeException: No leader found for partition 1
    at storm.kafka.DynamicBrokersReader.getLeaderFor(DynamicBrokersReader.java:120)
    at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:68)
    ... 7 more
```
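"No leader found for partition 1" is storm-kafka's DynamicBrokersReader failing to find a leader entry for that partition in ZooKeeper, which typically means the leader broker is down or the partition state in ZK is stale. One way to look at exactly what the spout sees, a sketch using the plain ZooKeeper client (the connection string and topic name are placeholders):

```java
import org.apache.zookeeper.ZooKeeper;

// PartitionLeaderCheck: reads the per-partition state node that ZK-based
// Kafka maintains; the printed JSON contains a "leader" field (-1 = no leader).
public class PartitionLeaderCheck {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> { });
        byte[] state = zk.getData("/brokers/topics/mytopic/partitions/1/state", false, null);
        System.out.println(new String(state, "UTF-8"));
        zk.close();
    }
}
```

If the node is missing or shows leader -1, the fix is on the Kafka side (restart the broker, wait for leader election), not in the Storm topology.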
Storm + Kafka integration error, please help me figure out what is going on
```
70308 [Thread-29-spout-read-kafka] INFO s.k.ZkCoordinator - Task [1/1] New partition managers: [Partition{host=pamshost02:9092, partition=9}, Partition{host=pamshost02:9092, partition=8}, Partition{host=pamshost02:9092, partition=7}, Partition{host=pamshost02:9092, partition=6}, Partition{host=pamshost02:9092, partition=5}, Partition{host=pamshost02:9092, partition=4}, Partition{host=pamshost02:9092, partition=3}, Partition{host=pamshost02:9092, partition=0}, Partition{host=pamshost02:9092, partition=1}, Partition{host=pamshost02:9092, partition=2}]
70599 [Thread-29-spout-read-kafka] INFO s.k.PartitionManager - Read partition information from: /detect/readKafka/partition_9 --> null
91326 [Thread-29-spout-read-kafka] INFO k.c.SimpleConsumer - Reconnect due to socket error: java.nio.channels.ClosedChannelException
91327 [Thread-29-spout-read-kafka] ERROR b.s.util - Async loop died!
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[storm-kafka-0.10.0.jar:0.10.0]
    at backtype.storm.daemon.executor$fn__5624$fn__5639$fn__5670.invoke(executor.clj:607) ~[storm-core-0.10.0.jar:0.10.0]
    at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[kafka_2.11-0.9.0.0.jar:?]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[storm-kafka-0.10.0.jar:0.10.0]
    ... 6 more
91329 [Thread-29-spout-read-kafka] ERROR b.s.d.executor -
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[storm-kafka-0.10.0.jar:0.10.0]
    at backtype.storm.daemon.executor$fn__5624$fn__5639$fn__5670.invoke(executor.clj:607) ~[storm-core-0.10.0.jar:0.10.0]
    at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[kafka_2.11-0.9.0.0.jar:?]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[kafka_2.11-0.9.0.0.jar:?]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[storm-kafka-0.10.0.jar:0.10.0]
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[storm-kafka-0.10.0.jar:0.10.0]
    ... 6 more
91535 [Thread-29-spout-read-kafka] ERROR b.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
    at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:336) [storm-core-0.10.0.jar:0.10.0]
    at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.6.0.jar:?]
    at backtype.storm.daemon.worker$fn__7184$fn__7185.invoke(worker.clj:532) [storm-core-0.10.0.jar:0.10.0]
    at backtype.storm.daemon.executor$mk_executor_data$fn__5523$fn__5524.invoke(executor.clj:261) [storm-core-0.10.0.jar:0.10.0]
    at backtype.storm.util$async_loop$fn__545.invoke(util.clj:489) [storm-core-0.10.0.jar:0.10.0]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
```
Spring Boot + Kafka fails on startup
The error log is below. With groupId=MyGroupId it can connect normally, but with groupId=etl it fails to connect and errors out. Could someone please take a look? If any other related files are needed I will provide them. Waiting online, much appreciated.
```
[application ASCII-art startup banner, mangled by copy-paste]
2020-02-21 23:53:40.414 mallSearch [main] INFO c.u.j.c.EnableEncryptablePropertiesConfiguration - Bootstraping jasypt-string-boot auto configuration in context: application-1
2020-02-21 23:53:40.414 mallSearch [main] INFO c.c.mall.mallsearch.ReportManagerApplication - The following profiles are active: dev
2020-02-21 23:53:42.148 mallSearch [main] WARN org.mybatis.spring.mapper.ClassPathMapperScanner - No MyBatis mapper was found in '[cn.chinaunicom.sdsi.**.entity, cn.chinaunicom.mall.**.entity]' package. Please check your configuration.
2020-02-21 23:53:42.476 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
2020-02-21 23:53:42.476 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Bootstrapping Spring Data repositories in DEFAULT mode.
2020-02-21 23:53:42.695 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 219ms. Found 9 repository interfaces.
2020-02-21 23:53:42.710 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
2020-02-21 23:53:42.710 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Bootstrapping Spring Data repositories in DEFAULT mode.
2020-02-21 23:53:42.804 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.AttrCategoryDocumentRepository.
2020-02-21 23:53:42.804 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.AttrDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.AttrValueDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.BrandDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.CategorytreeDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsPoolComSpuDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsSkuDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsSpuCategorytreeDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsSpuDocumentRepository.
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 31ms. Found 0 repository interfaces.
2020-02-21 23:53:43.522 mallSearch [main] INFO o.springframework.cloud.context.scope.GenericScope - BeanFactory id=f1ad9868-69a4-3087-8cf7-8a78137ec329
2020-02-21 23:53:43.554 mallSearch [main] INFO c.u.j.c.EnableEncryptablePropertiesBeanFactoryPostProcessor - Post-processing PropertySource instances
2020-02-21 23:53:43.601 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource configurationProperties [org.springframework.boot.context.properties.source.ConfigurationPropertySourcesPropertySource] to AOP Proxy
2020-02-21 23:53:43.601 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource servletConfigInitParams [org.springframework.core.env.PropertySource$StubPropertySource] to EncryptablePropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource servletContextInitParams [org.springframework.core.env.PropertySource$StubPropertySource] to EncryptablePropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource systemProperties [org.springframework.core.env.MapPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource systemEnvironment [org.springframework.boot.env.SystemEnvironmentPropertySourceEnvironmentPostProcessor$OriginAwareSystemEnvironmentPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource random [org.springframework.boot.env.RandomValuePropertySource] to EncryptablePropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource applicationConfig: [classpath:/application-dev.yml] [org.springframework.boot.env.OriginTrackedMapPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource applicationConfig: [classpath:/config/application.yml] [org.springframework.boot.env.OriginTrackedMapPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource applicationConfig: [classpath:/application.yml] [org.springframework.boot.env.OriginTrackedMapPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource springCloudClientHostInfo [org.springframework.core.env.MapPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource defaultProperties [org.springframework.core.env.MapPropertySource] to EncryptableMapPropertySourceWrapper
2020-02-21 23:53:43.694 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$46cfe68c] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:43.710 mallSearch [main] INFO c.u.j.filter.DefaultLazyPropertyFilter - Property Filter custom Bean not found with name 'encryptablePropertyFilter'. Initializing Default Property Filter
2020-02-21 23:53:43.850 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration' of type [org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration$$EnhancerBySpringCGLIB$$bcb9d43] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.069 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'objectPostProcessor' of type [org.springframework.security.config.annotation.configuration.AutowireBeanFactoryObjectPostProcessor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.085 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler@70f5f59d' of type [org.springframework.security.oauth2.provider.expression.OAuth2MethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.085 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration$$EnhancerBySpringCGLIB$$30a03ff5] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.132 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.163 mallSearch [main] INFO c.u.j.resolver.DefaultLazyPropertyResolver - Property Resolver custom Bean not found with name 'encryptablePropertyResolver'. Initializing Default Property Resolver
2020-02-21 23:53:44.163 mallSearch [main] INFO c.u.j.detector.DefaultLazyPropertyDetector - Property Detector custom Bean not found with name 'encryptablePropertyDetector'. Initializing Default Property Detector
2020-02-21 23:53:44.179 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'spring.cache-org.springframework.boot.autoconfigure.cache.CacheProperties' of type [org.springframework.boot.autoconfigure.cache.CacheProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.194 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'unifastRedisCacheConfig' of type [cn.chinaunicom.sdsi.security.cache.config.UnifastRedisCacheConfig$$EnhancerBySpringCGLIB$$7185313] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:44.335 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$8f37d806] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-02-21 23:53:45.006 mallSearch [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 9011 (http)
2020-02-21 23:53:45.022 mallSearch [main] INFO org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-9011"]
2020-02-21 23:53:45.038 mallSearch [main] INFO org.apache.catalina.core.StandardService - Starting service [Tomcat]
2020-02-21 23:53:45.038 mallSearch [main] INFO org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.16]
2020-02-21 23:53:45.053 mallSearch [main] INFO org.apache.catalina.core.AprLifecycleListener - The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [C:\Program Files\Java\jdk1.8.0_144\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Program Files (x86)\Intel\iCLS Client\;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Intel\iCLS Client\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Java\jdk1.8.0_144\bin;C:\Program Files\Git\cmd;C:\Program Files\apache-maven-3.2.2\bin;C:\Program Files\Mysql\bin;C:\Program Files\MySQL\MySQL Utilities 1.6\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\TortoiseSVN\bin;D:\工具\curl-7.59.0-win64-mingw\bin;C:\WINDOWS\System32\OpenSSH\;C:\Windows\WinSxS\amd64_microsoft-windows-telnet-client_31bf3856ad364e35_10.0.17134.1_none_9db21dbc8e34d070;C:\Program Files\nodejs\;D:\nginx-1.13.7;C:\Program Files\erl10.4\bin;D:\工具\rabbitmq_server-3.7.15\sbin;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\TortoiseGit\bin;C:\Users\xiaolei\AppData\Local\Microsoft\WindowsApps;;C:\Users\xiaolei\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\xiaolei\AppData\Roaming\npm;.]
2020-02-21 23:53:45.350 mallSearch [main] INFO o.a.c.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2020-02-21 23:53:45.350 mallSearch [main] INFO org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 4905 ms
2020-02-21 23:53:47.034 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - no modules loaded
2020-02-21 23:53:47.036 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
2020-02-21 23:53:47.036 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2020-02-21 23:53:47.039 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2020-02-21 23:53:47.039 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
2020-02-21 23:53:47.039 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2020-02-21 23:53:49.084 mallSearch [main] INFO o.s.d.e.client.TransportClientFactoryBean - Adding transport node : 10.236.6.52:9200
2020-02-21 23:54:20.425 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:20.940 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:20.971 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:21.034 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:21.065 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:21.112 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:21.315 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:21.362 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:21.643 mallSearch [main] INFO org.redisson.Version - Redisson 3.12.0
2020-02-21 23:54:21.909 mallSearch [redisson-netty-4-24] INFO o.r.connection.pool.MasterPubSubConnectionPool - 1 connections initialized for 10.236.6.54/10.236.6.54:6379
2020-02-21 23:54:21.956 mallSearch [redisson-netty-4-28] INFO org.redisson.connection.pool.MasterConnectionPool - 20 connections initialized for 10.236.6.54/10.236.6.54:6379
2020-02-21 23:54:22.690 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}]
2020-02-21 23:54:22.971 mallSearch [main] INFO s.d.s.w.PropertySourcedRequestMappingHandlerMapping - Mapped URL path [/v2/api-docs] onto method [public org.springframework.http.ResponseEntity<springfox.documentation.spring.web.json.Json> springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(java.lang.String,javax.servlet.http.HttpServletRequest)]
2020-02-21 23:54:23.408 mallSearch [main] INFO o.s.security.web.DefaultSecurityFilterChain - Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2408ca4c, org.springframework.security.web.context.SecurityContextPersistenceFilter@29509774, org.springframework.security.web.header.HeaderWriterFilter@3741a170, org.springframework.security.web.authentication.logout.LogoutFilter@1988e095, org.springframework.security.oauth2.provider.authentication.OAuth2AuthenticationProcessingFilter@4870d2e1, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@9198fe3, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4dfe14d4, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2f7e2481, org.springframework.security.web.session.SessionManagementFilter@26c24e5, org.springframework.security.web.access.ExceptionTranslationFilter@4a467f08, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@5de15ce1]
[uReport ASCII-art banner, mangled by copy-paste]
. uReport, is a Chinese style report engine licensed under the Apache License 2.0, .
. which is opensource, easy to use,high-performance, with browser-based-designer. .
2020-02-21 23:54:25.330 mallSearch [main] WARN com.netflix.config.sources.URLConfigurationSource - No URLs will be polled as dynamic configuration sources.
2020-02-21 23:54:25.330 mallSearch [main] INFO com.netflix.config.sources.URLConfigurationSource - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2020-02-21 23:54:25.345 mallSearch [main] WARN com.netflix.config.sources.URLConfigurationSource - No URLs will be polled as dynamic configuration sources.
2020-02-21 23:54:25.345 mallSearch [main] INFO com.netflix.config.sources.URLConfigurationSource - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2020-02-21 23:54:25.720 mallSearch [main] INFO o.s.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor'
2020-02-21 23:54:27.095 mallSearch [main] WARN o.s.b.a.freemarker.FreeMarkerAutoConfiguration - Cannot find template location(s): [classpath:/templates/] (please add some templates, check your FreeMarker configuration, or set spring.freemarker.checkTemplateLocation=false)
2020-02-21 23:54:29.282 mallSearch [main] INFO o.s.cloud.netflix.eureka.InstanceInfoFactory - Setting initial instance status as: STARTING
2020-02-21 23:54:29.360 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Initializing Eureka in region us-east-1
2020-02-21 23:54:29.438 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using JSON encoding codec LegacyJacksonJson
2020-02-21 23:54:29.438 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using JSON decoding codec LegacyJacksonJson
2020-02-21 23:54:29.610 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using XML encoding codec XStreamXml
2020-02-21 23:54:29.610 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using XML decoding codec XStreamXml
2020-02-21 23:54:29.954 mallSearch [main] INFO c.n.d.shared.resolver.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
2020-02-21 23:54:30.266 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Disable delta property : false
2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Single vip registry refresh property : null
2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Force full registry fetch : false
2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Application is null : false
2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Registered Applications size is zero : true
2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Application version is -1: true
2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Getting all instance registry info from the eureka server
2020-02-21 23:54:30.828 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - The response status is 200
2020-02-21 23:54:30.844 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Starting heartbeat executor: renew interval is: 30
2020-02-21 23:54:30.844 mallSearch [main] INFO com.netflix.discovery.InstanceInfoReplicator - InstanceInfoReplicator onDemand update allowed rate per min is 4
2020-02-21 23:54:30.844 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Discovery Client initialized at timestamp 1582300470844 with initial instances count: 8
2020-02-21 23:54:30.875 mallSearch [main] INFO o.s.c.n.e.serviceregistry.EurekaServiceRegistry - Registering application reportmanager with eureka with status UP
2020-02-21 23:54:30.875 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Saw local status change event StatusChangeEvent [timestamp=1582300470875, current=UP, previous=STARTING]
2020-02-21 23:54:30.907 mallSearch [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 1000 auto.offset.reset = earliest bootstrap.servers = [10.236.6.52:9092] check.crcs = true client.id = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = myGroupId heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2020-02-21 23:54:31.016 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.1
2020-02-21 23:54:31.016 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fa14705e51bd2ce5
2020-02-21 23:54:31.313 mallSearch [DiscoveryClient-InstanceInfoReplicator-0] INFO com.netflix.discovery.DiscoveryClient - DiscoveryClient_REPORTMANAGER/192.168.62.1:9011: registering service...
2020-02-21 23:54:31.547 mallSearch [DiscoveryClient-InstanceInfoReplicator-0] INFO com.netflix.discovery.DiscoveryClient - DiscoveryClient_REPORTMANAGER/192.168.62.1:9011 - registration status: 204
2020-02-21 23:54:31.656 mallSearch [main] INFO org.apache.kafka.clients.Metadata - Cluster ID: rgWO07ohQH6Rn35woczhUQ
2020-02-21 23:54:31.719 mallSearch [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 1000 auto.offset.reset = earliest bootstrap.servers = [10.236.6.52:9092] check.crcs = true client.id = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = myGroupId heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2020-02-21 23:54:31.719 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.1
2020-02-21 23:54:31.719 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fa14705e51bd2ce5
2020-02-21 23:54:31.719 mallSearch [main] INFO o.s.scheduling.concurrent.ThreadPoolTaskScheduler - Initializing ExecutorService
2020-02-21 23:54:31.734 mallSearch [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [127.0.0.1:9092] check.crcs = true client.id = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = etl heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 180000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 120000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2020-02-21 23:54:31.734 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.1
2020-02-21 23:54:31.750 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fa14705e51bd2ce5
2020-02-21 23:54:32.016 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO org.apache.kafka.clients.Metadata - Cluster ID: rgWO07ohQH6Rn35woczhUQ
2020-02-21 23:54:32.047 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Discovered group coordinator 10.236.6.52:9092 (id: 2147483646 rack: null)
2020-02-21 23:54:32.063 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Revoking previously assigned partitions []
2020-02-21 23:54:32.063 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.s.kafka.listener.KafkaMessageListenerContainer - partitions revoked: []
2020-02-21 23:54:32.063 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] (Re-)joining group
2020-02-21 23:54:32.391 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Successfully joined group with generation 21
2020-02-21 23:54:32.406 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Setting newly assigned partitions [es-mall-brand-update-0]
2020-02-21 23:54:32.625 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.s.kafka.listener.KafkaMessageListenerContainer - partitions assigned: [es-mall-brand-update-0]
2020-02-21 23:54:32.828 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:33.969 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:35.219 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:36.468 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:37.937 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:39.764 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:41.904 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:44.185 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:46.372 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:48.309 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:50.231 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:52.152 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:54.370 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:56.495 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:54:58.416 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:00.588 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:02.602 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:04.540 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:06.570 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:08.617 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:10.507 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:12.459 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:14.506 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:16.631 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:18.802 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:20.708 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:22.848 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:24.988 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:27.113 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:29.346 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:31.611 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available.
2020-02-21 23:55:31.750 mallSearch [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
2020-02-21 23:55:31.782 mallSearch [main] INFO o.s.scheduling.concurrent.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
2020-02-21 23:55:32.516 mallSearch [main] INFO o.s.jmx.export.annotation.AnnotationMBeanExporter - Could not unregister MBean [com.github.tobato.fastdfs.conn:name=fdfsConnectionPool,type=FdfsConnectionPool] as said MBean is not registered (perhaps already unregistered by an external process)
2020-02-21 23:56:00.384 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Shutting down DiscoveryClient ...
2020-02-21 23:56:00.431 mallSearch [main] WARN o.s.c.annotation.CommonAnnotationBeanPostProcessor - Destroy method on bean with name 'scopedTarget.eurekaClient' threw an exception: org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'eurekaInstanceConfigBean': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
2020-02-21 23:56:00.431 mallSearch [main] INFO org.apache.catalina.core.StandardService - Stopping service [Tomcat]
2020-02-21 23:56:00.462 mallSearch [main] INFO o.s.b.a.l.ConditionEvaluationReportLoggingListener - Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-02-21 23:56:00.478 mallSearch [main] ERROR org.springframework.boot.SpringApplication - Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
    at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
    at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
    at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
    at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
    at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
    at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:893)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:163)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
    at cn.chinaunicom.mall.mallsearch.ReportManagerApplication.main(ReportManagerApplication.java:85)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
2020-02-21 23:56:01.634 mallSearch [DiscoveryClient-InstanceInfoReplicator-0] WARN com.netflix.discovery.InstanceInfoReplicator - There was a problem with the instance info replicator
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'eurekaInstanceConfigBean' defined in class path resource [org/springframework/cloud/netflix/eureka/EurekaClientAutoConfiguration.class]: Unsatisfied dependency expressed through method 'eurekaInstanceConfigBean' parameter 0; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'inetUtilsProperties': Could not bind properties to 'InetUtilsProperties' : prefix=spring.cloud.inetutils, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@31c2affc has not been refreshed yet
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
    at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:509)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
    at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167)
    at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857)
    at org.springframework.beans.factory.support.ConstructorResolver.resolvePreparedArguments(ConstructorResolver.java:804)
    at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:430)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$1(AbstractBeanFactory.java:356)
    at org.springframework.cloud.context.scope.GenericScope$BeanLifecycleWrapper.getBean(GenericScope.java:390)
    at org.springframework.cloud.context.scope.GenericScope.get(GenericScope.java:184)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:353)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
    at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35)
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:672)
    at com.netflix.appinfo.ApplicationInfoManager$$EnhancerBySpringCGLIB$$918f0f04.refreshDataCenterInfoIfRequired(<generated>)
    at com.netflix.discovery.DiscoveryClient.refreshInstanceInfo(DiscoveryClient.java:1377)
    at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:117)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
    at java.util.concurrent.FutureTask.run(FutureTask.java)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at
```
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'inetUtilsProperties': Could not bind properties to 'InetUtilsProperties' : prefix=spring.cloud.inetutils, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@31c2affc has not been refreshed yet at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.bind(ConfigurationPropertiesBindingPostProcessor.java:110) at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.postProcessBeforeInitialization(ConfigurationPropertiesBindingPostProcessor.java:93) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:414) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1754) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) at org.springframework.beans.factory.support.ConstructorResolver.resolvePreparedArguments(ConstructorResolver.java:804) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:430) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515) at 
org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760) ... 37 common frames omitted Caused by: java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@31c2affc has not been refreshed yet at org.springframework.context.support.AbstractApplicationContext.assertBeanFactoryActive(AbstractApplicationContext.java:1092) at org.springframework.context.support.AbstractApplicationContext.getBeanProvider(AbstractApplicationContext.java:1134) at org.springframework.boot.context.properties.ConfigurationPropertiesBinder.getBindHandlerAdvisors(ConfigurationPropertiesBinder.java:138) at org.springframework.boot.context.properties.ConfigurationPropertiesBinder.getBindHandler(ConfigurationPropertiesBinder.java:130) at org.springframework.boot.context.properties.ConfigurationPropertiesBinder.bind(ConfigurationPropertiesBinder.java:82) at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.bind(ConfigurationPropertiesBindingPostProcessor.java:107) ... 65 common frames omitted Disconnected from the target VM, address: '127.0.0.1:62410', transport: 'socket' ```
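In this log, node -1 is the bootstrap connection, so the consumer never reached any broker at all: either no broker is listening, or spring.kafka.bootstrap-servers points at an address that is unreachable from the machine running the service. A minimal application.properties sketch (host and port are placeholders, not values from the original post; the group id matches the groupId=etl seen in the log):

```
# application.properties - placeholder address; it must be a broker
# reachable from the machine that runs this Spring Boot service
spring.kafka.bootstrap-servers=192.168.0.10:9092
spring.kafka.consumer.group-id=etl
```

Before blaming the application config, it is worth verifying with a plain TCP check (telnet/nc) from the same host that the broker port actually answers.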
Kafka Connect with MySQL as both source and sink fails with "record value schema is missing"
Using Kafka Connect with MySQL as both the source and the sink, the sink task fails with "record value schema is missing". The error log is as follows:

```
[2019-10-31 14:37:32,956] ERROR WorkerSinkTask{id=mysql-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:544)
org.apache.kafka.connect.errors.ConnectException: PK mode for table 'dim_channel_copy' is RECORD_VALUE, but record value schema is missing
    at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extractRecordValuePk(FieldsMetadata.java:238)
    at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:102)
    at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
    at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:71)
    at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
    at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:302)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
[2019-10-31 14:37:32,957] ERROR WorkerSinkTask{id=mysql-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:302)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: PK mode for table 'dim_channel_copy' is RECORD_VALUE, but record value schema is missing
    at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extractRecordValuePk(FieldsMetadata.java:238)
    at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:102)
    at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
    at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:71)
    at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
    at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
    ... 10 more
[2019-10-31 14:37:32,958] ERROR WorkerSinkTask{id=mysql-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:173)
[2019-10-31 14:37:32,958] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:100)
```
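With pk.mode=record_value, the JDBC sink needs record values that carry a schema (a Connect Struct), but schemaless JSON arrives without one, which is exactly what this stack trace complains about. One common way to satisfy it, sketched under the assumption that the worker uses the JSON converter and the source emits schema-bearing JSON, is to enable embedded schemas on both sides:

```
# Kafka Connect worker/connector config (sketch; these are standard Connect
# keys, but whether they fit depends on how the records were produced)
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
```

Alternatively, Avro plus a schema registry gives every record a schema without embedding it in each message.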
kafka.common.KafkaException:
```
package com;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {

    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer producer = createProducer();
        int i = 0;
        while (true) {
            producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer createProducer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "localhost:2181"); // declare zk
        properties.put("serializer.class", StringEncoder.class.getName());
        properties.put("metadata.broker.list", "localhost:9092"); // declare kafka broker
        return new Producer<Integer, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaProducer("test").start(); // use the topic "test" already created on the kafka cluster
    }
}
```

Running this producer throws:

```
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
    at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
    at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
    at kafka.utils.Utils$.swallow(Utils.scala:172)
    at kafka.utils.Logging$class.swallowError(Logging.scala:106)
    at kafka.utils.Utils$.swallowError(Utils.scala:45)
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
    at kafka.producer.Producer.send(Producer.scala:77)
    at kafka.javaapi.producer.Producer.send(Producer.scala:33)
    at com.kafkaProducer.run(kafkaProducer.java:29)
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
    ... 9 more
```
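With the old Scala producer, fetchTopicMetadata failing with ClosedChannelException usually means the producer reached localhost:9092 but the broker either is not listening there or advertises a different host in its metadata. A sketch of the 0.8-era server.properties settings worth checking (the values here are placeholders, not taken from the original post):

```
# config/server.properties (Kafka 0.8.x; placeholder values)
port=9092
# interface the broker binds to; empty means all interfaces
host.name=localhost
# host/port the broker hands back to clients in metadata responses -
# these must be resolvable and reachable from the producer machine
advertised.host.name=localhost
advertised.port=9092
```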
Problem writing Kafka data into HBase
My environment is an HDP pseudo-distributed cluster. The project uses Flume to collect data and send it to various Kafka topics; a jar then pulls the data from Kafka and writes it to HBase for persistence. Because the data volume is fairly large, the RegionServer dies after about half an hour of ingestion each time. The project code itself should be fine - I am still learning, and others can run it without errors. The problem looks like this:

```
java.io.FileNotFoundException: File /tmp/hbase-root/hbase/lib does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:431) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:674) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:178) [hbase-common-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:142) [hbase-common-1.1.2.jar!/:1.1.2]
    at java.lang.Class.forName0(Native Method) [na:1.8.0_161]
    at java.lang.Class.forName(Class.java:348) [na:1.8.0_161]
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1543) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.protobuf.ResponseConverter.getResults(ResponseConverter.java:120) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:134) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:54) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:708) [hbase-client-1.1.2.jar!/:1.1.2]
```

It suddenly starts looking for files under /tmp/hbase-root/hbase/lib, a path my project never reads from. I went to that path and it is empty - in fact the path does not exist at all. I then checked the HBase logs, where HBase throws a whole combo of errors:

```
2020-03-21 19:29:49,789 ERROR [Thread-19] util.PolicyRefresher: PolicyRefresher(serviceName=Sandbox_hbase): failed to refresh policies. Will continue to use last known version of policies (6)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 8 more
```

Then comes the read timeout:

```
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
```

And then the most baffling exception:

```
2020-03-21 19:33:36,252 ERROR [Thread-19] util.PolicyRefresher: PolicyRefresher(serviceName=Sandbox_hbase): failed to refresh policies. Will continue to use last known version of policies (6)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 8 more
```

Hoping an expert can explain.
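A guess at the mechanism (an editing-time reading, not from the original post): /tmp/hbase-root is the default hbase.tmp.dir (/tmp/hbase-${user.name}), and when hbase.rootdir is unset it defaults to a directory beneath it; the client's DynamicClassLoader then looks under <rootdir>/lib for extra jars while deserializing an exception returned by the RegionServer. The FileNotFoundException is therefore only masking whatever error the RegionServer actually sent back - the RegionServer log is where the real cause lives. A hedged hbase-site.xml sketch that at least pins the directories off /tmp (paths are placeholders):

```
<!-- hbase-site.xml (sketch; paths are placeholders) -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode:8020/hbase</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/var/hbase/tmp</value>
  </property>
</configuration>
```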
How do I solve this problem with Kafka on Docker?
First, some context for this error: I did not build this in my own VM. Huawei gave me a server, so I installed Docker directly on it, created three containers running Kafka, and built the cluster with docker-compose. (The screenshot of the mapped ports is omitted here.) When IDEA connects to the Kafka cluster, it first connects to IP:5000, 5002, 5004, then reconnects to the returned host.name = kafka1, kafka2, kafka3, and finally keeps connecting to advertised.host.name = kafka1, kafka2, kafka3. On an ordinary server this would be fine - just add host-to-IP mappings to the local hosts file - but for these containers that cannot work: the container IPs are internal to the Docker network, so my local machine cannot reach them.

```
20/01/16 22:11:04 INFO AppInfoParser: Kafka version: 2.4.0
20/01/16 22:11:04 INFO AppInfoParser: Kafka commitId: 77a89fcf8d7fa018
20/01/16 22:11:04 INFO AppInfoParser: Kafka startTimeMs: 1579183864167
20/01/16 22:11:04 INFO KafkaConsumer: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Subscribed to topic(s): test, topicongbo
20/01/16 22:11:04 INFO Metadata: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Cluster ID: Kkwgy0gkSkmGAlsC_5cz9A
20/01/16 22:11:04 INFO AbstractCoordinator: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Discovered group coordinator kafka3:9092 (id: 2147483644 rack: null)
20/01/16 22:11:06 WARN NetworkClient: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Error connecting to node kafka3:9092 (id: 2147483644 rack: null)
java.net.UnknownHostException: kafka3
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
    at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
    at java.net.InetAddress.getAllByName(InetAddress.java:1193)
    at java.net.InetAddress.getAllByName(InetAddress.java:1127)
    at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
    at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:572)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:757)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:737)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:599)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:409)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:294)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.paranoidPoll(DirectKafkaInputDStream.scala:172)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:260)
    at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7(DStreamGraph.scala:54)
    at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7$adapted(DStreamGraph.scala:54)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:145)
    at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:974)
    at scala.collection.parallel.Task.$anonfun$tryLeaf$1(Tasks.scala:53)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.control.Breaks$$anon$1.catchBreak(Breaks.scala:67)
    at scala.collection.parallel.Task.tryLeaf(Tasks.scala:56)
    at scala.collection.parallel.Task.tryLeaf$(Tasks.scala:50)
    at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:971)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:153)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440)
    at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
```

So how can this error be fixed? Also, I have no permission to change the Huawei security group; only ports 5000-5010 are open to the outside.
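The usual fix is to stop advertising the container hostnames and instead advertise the server's public IP plus the externally mapped port, one distinct port per broker - then the client never needs to resolve kafka1/kafka2/kafka3. A docker-compose sketch for one broker, assuming a wurstmeister/kafka-style image where KAFKA_* environment variables map onto server.properties entries (the public IP and service names are placeholders):

```
# docker-compose.yml fragment for broker 1 (sketch)
kafka1:
  image: wurstmeister/kafka
  ports:
    - "5000:5000"        # host port must match the advertised port
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:5000
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<public-server-ip>:5000
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Brokers 2 and 3 would advertise ports 5002 and 5004 in the same way, which also stays within the 5000-5010 range the security group allows.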
On Windows, Kafka fails on restart after creating a topic
```
[2017-12-20 17:20:15,475] WARN Found a corrupted index file due to requirement failed: Corrupt index found, index file (D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.index) has non-zero size but the last offset is 0 which is no larger than the base offset 0.}. deleting D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.timeindex, D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.index, and D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.txnindex and rebuilding index... (kafka.log.Log)
[2017-12-20 17:20:15,475] ERROR Error while loading log dir D:\program\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
java.nio.file.FileSystemException: D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.timeindex: 另一个程序正在使用此文件,进程无法访问。
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
    at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
    at java.nio.file.Files.deleteIfExists(Files.java:1165)
    at kafka.log.Log.$anonfun$loadSegmentFiles$3(Log.scala:335)
    at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:191)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
    at kafka.log.Log.loadSegmentFiles(Log.scala:297)
    at kafka.log.Log.loadSegments(Log.scala:406)
    at kafka.log.Log.<init>(Log.scala:203)
    at kafka.log.Log$.apply(Log.scala:1735)
    at kafka.log.LogManager.loadLog(LogManager.scala:231)
    at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:292)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```
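This is a long-standing Kafka-on-Windows problem: on startup the index rebuild tries to delete files whose handles are still held (the Chinese message in the trace says "another program is using this file; the process cannot access it"), and NTFS refuses. A common workaround - it discards local log data, so only suitable for a test machine - is to stop everything and wipe the log directories so Kafka recreates them. Sketched with the path from the error message:

```
rem Stop Kafka and ZooKeeper first. WARNING: this deletes local topic data.
rmdir /s /q D:\program\kafka_2.12-1.0.0\kafka-logs
rem If topics should disappear completely, also clear the ZooKeeper dataDir,
rem then restart ZooKeeper and then Kafka.
```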
Kafka job fails with an error, please help
```
Exception in thread "main" org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
org.apache.spark.SparkException: Couldn't find leader offsets for Set([ts_bg0_type0,0], [ts_bg0_type0,1], [ts_bg0_type0,2])
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:385)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:385)
    at scala.util.Either.fold(Either.scala:98)
    at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:384)
    at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
    at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
    at cn.test.Kafka_Streaming_cv$.main(Kafka_Streaming_cv.scala:49)
    at cn.test.Kafka_Streaming_cv.main(Kafka_Streaming_cv.scala)
```
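"Couldn't find leader offsets" from the direct stream usually means the driver obtained topic metadata but could not use it: the topic does not exist, a partition has no leader, or the broker host advertised in the metadata does not resolve from the driver machine (the ClosedChannelException points that way). A quick first check, sketched with a placeholder ZooKeeper address and the topic name from the error:

```
# verify the topic exists and every partition has a leader
bin/kafka-topics.sh --describe --zookeeper zk-host:2181 --topic ts_bg0_type0
```

If the describe output looks healthy, the next suspect is host resolution: the driver must be able to resolve and reach the broker hostnames shown in the metadata.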
kafka ArrayIndexOutOfBoundsException: 18
I ran into the problem below with Kafka. Searching online, everyone says it happens when a newer Kafka client sends requests to an older broker: brokers before 0.10 do not support the ApiVersion (key: 18) request. But the Kafka client on all my producers, consumers, and the Kafka server is 0.9.0.1, so this problem should not occur. Why does it? Any pointers would be appreciated.

```
[2018-10-25 10:03:17,919] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2018-10-25 10:03:18,080] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic-test,0] (kafka.server.ReplicaFetcherManager)
[2018-10-25 10:03:18,099] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic-test,0] (kafka.server.ReplicaFetcherManager)
[2018-10-25 10:03:48,864] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18
    at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
    at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
    at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
    at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
    at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at kafka.network.Processor.run(SocketServer.scala:421)
    at java.lang.Thread.run(Thread.java:748)
```
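Since ApiVersions (key 18) only exists from 0.10 onward, something on the network is in fact speaking a newer protocol to this 0.9 broker - often a transitive kafka-clients dependency pulled in by another library (Spring Kafka, Flume, Logstash, a monitoring agent) rather than the version installed on purpose. For a Maven project, a quick way to check, as a sketch:

```
# list every Kafka artifact on the classpath with its resolved version
mvn dependency:tree -Dincludes=org.apache.kafka | grep -i kafka
```

Any kafka-clients entry at 0.10+ in that output would explain the key-18 request even though everything was "installed" as 0.9.0.1.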
Spark Streaming writing data to Kafka reports "Error in I/O......"
I deployed single-node Kafka 0.8.2.1 following the official site; the deployment commands, in order, were:

```
# start the bundled zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties &
# start the kafka server
bin/kafka-server-start.sh config/server.properties &
# create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test &
```

With this setup, following the official commands, starting a producer, writing data, and then starting a consumer works: the data is consumed normally. Next, in Spark 1.5.2 (whose official docs say the matching Kafka version is 0.8.2.1), I used 0.8.2.1 jars throughout (at least for everything Kafka-related), with the following code:

```
val sparkConf = new SparkConf().setAppName("kafka")
val ssc = new StreamingContext(sparkConf, Seconds(10))

val properties = new Properties()
properties.put("bootstrap.servers", "100.173.249.68:2181")
properties.put("metadata.broker.list", "100.173.249.68:9092")
properties.put("group.id", "test-consumer-group")
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](properties)
val record = new ProducerRecord[String, String]("test", "push data")
producer.send(record)

ssc.start()
ssc.awaitTermination()
```

After packaging this into a jar and running it on the server, it fails. The Kafka side keeps printing these errors:

```
INFO Accepted socket connection from /10.173.249.68:58489 (org.apache.zookeeper.server.NIOServerCnxnFactory)
WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
INFO Closed socket connection for client /10.173.249.68:58489 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
```

and the Spark side keeps printing:

```
WARN network.Selector: Error in I/O with chenxm/10.173.249.68
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
    at java.lang.Thread.run(Thread.java:745)
```

I don't know what the cause is - thank you very much for any help.
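One detail stands out in the code above (an observation added in editing, not from the original post): bootstrap.servers points at port 2181, which is ZooKeeper's port, and the "Kafka-side" errors are in fact ZooKeeper (NIOServerCnxn) closing a connection that does not speak its protocol. The new-style org.apache.kafka.clients.producer.KafkaProducer reads only bootstrap.servers (zookeeper.connect and metadata.broker.list are ignored), so if this reading is right the fix is a one-line change:

```
// point the new producer at the broker port, not at ZooKeeper
properties.put("bootstrap.servers", "100.173.249.68:9092")
```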
Running a Flume agent produces the following error
My configuration:

```
agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type=spooldir
agent.sources.s1.spoolDir=/tmp/logs/tomcat2kafka
agent.sources.s1.channels=c1

agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100

# Kafka sink
agent.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.k1.brokerList=222.30.194.254:9092
# Kafka topic
agent.sinks.k1.topic=kafkatest2
# serialization
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.channel=c1
```

Error message:

```
[ERROR - org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:240)] Failed to publish events
org.apache.kafka.common.errors.InterruptException: Flush interrupted.
    at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:546)
    at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:224)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
    at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
    at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
    at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
    ... 4 more
```

There really is no matching answer online; I am stuck, and offering bounty points for help.
Kafka and Storm integration fails: jar not found
When integrating Kafka and Storm in Java, I get an error about a missing jar, but after a long search online I could not find the jar the error asks for - what is going on? I used the crudest method: importing all the jars into the project directly. Kafka version is 2.9.2-0.8.2.1, Storm is 0.9.7, JDK is 1.7. The error log is as follows:

```
9391 [refresh-active-timer] INFO backtype.storm.daemon.worker - All connections are ready for worker 5e95c764-cf8b-4ac9-9c89-911e34720c23:1024 with id b788e814-1915-4116-87ad-1514bc9a201b
9417 [Thread-15-__system] INFO backtype.storm.daemon.executor - Preparing bolt __system:(-1)
9428 [Thread-15-__system] INFO backtype.storm.daemon.executor - Prepared bolt __system:(-1)
9437 [Thread-17-__acker] INFO backtype.storm.daemon.executor - Preparing bolt __acker:(1)
9440 [Thread-17-__acker] INFO backtype.storm.daemon.executor - Prepared bolt __acker:(1)
9457 [Thread-11-kafkabolt] INFO backtype.storm.daemon.executor - Preparing bolt kafkabolt:(3)
9465 [Thread-9-bolt] INFO backtype.storm.daemon.executor - Preparing bolt bolt:(2)
9465 [Thread-9-bolt] INFO backtype.storm.daemon.executor - Prepared bolt bolt:(2)
9480 [Thread-13-spout] INFO backtype.storm.daemon.executor - Opening spout spout:(4)
9484 [Thread-13-spout] ERROR backtype.storm.util - Async loop died!
java.lang.NoClassDefFoundError: com/netflix/curator/RetryPolicy
    at storm.kafka.KafkaSpout.open(KafkaSpout.java:68) ~[storm-kafka.jar:na]
    at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:529) ~[storm-core-0.9.7.jar:0.9.7]
    at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.7.jar:0.9.7]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Unknown Source) [na:1.7.0_17]
Caused by: java.lang.ClassNotFoundException: com.netflix.curator.RetryPolicy
    at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
    at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_17]
    at java.net.URLClassLoader.findClass(Unknown Source) ~[na:1.7.0_17]
    at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
    at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
    ... 5 common frames omitted
9485 [Thread-13-spout] ERROR backtype.storm.daemon.executor - java.lang.NoClassDefFoundError: com/netflix/curator/RetryPolicy
    at storm.kafka.KafkaSpout.open(KafkaSpout.java:68) ~[storm-kafka.jar:na]
    at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:529) ~[storm-core-0.9.7.jar:0.9.7]
    at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.7.jar:0.9.7]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Unknown Source) [na:1.7.0_17]
Caused by: java.lang.ClassNotFoundException: com.netflix.curator.RetryPolicy
    at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
    at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_17]
    at java.net.URLClassLoader.findClass(Unknown Source) ~[na:1.7.0_17]
    at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
    at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
    ... 5 common frames omitted
9725 [Thread-11-kafkabolt] INFO backtype.storm.daemon.executor - Prepared bolt kafkabolt:(3)
9885 [Thread-13-spout] ERROR backtype.storm.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
    at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:325) [storm-core-0.9.7.jar:0.9.7]
    at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
    at backtype.storm.daemon.worker$fn__4694$fn__4695.invoke(worker.clj:495) [storm-core-0.9.7.jar:0.9.7]
    at backtype.storm.daemon.executor$mk_executor_data$fn__3272$fn__3273.invoke(executor.clj:241) [storm-core-0.9.7.jar:0.9.7]
    at backtype.storm.util$async_loop$fn__460.invoke(util.clj:473) [storm-core-0.9.7.jar:0.9.7]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Unknown Source) [na:1.7.0_17]
```
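com.netflix.curator.RetryPolicy lives in the pre-Apache, Netflix-era Curator artifacts, which the storm-kafka.jar in this stack trace was evidently built against; the jar is missing from the classpath, hence the NoClassDefFoundError. Assuming those Netflix-era coordinates fit the storm-kafka build in use, a Maven sketch:

```
<!-- pre-Apache Netflix Curator, required by the old storm-kafka KafkaSpout -->
<dependency>
  <groupId>com.netflix.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>1.0.1</version>
</dependency>
```

When adding jars by hand instead of via Maven, the equivalent is dropping curator-framework (plus its curator-client dependency) into the project's lib directory.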
In Logstash, using Kafka as the input source, the JVM runs out of memory
```
log4j, [2017-09-11T09:39:02.399] ERROR: kafka.network.BoundedByteBufferReceive: OOME with size 20971590
java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
    at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
    at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
    at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
    at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
    at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
    at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
    at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
    at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
    at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
    at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
    at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
    at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
    at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
    at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
```

How do I solve this problem?
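The OOM happens while allocating a single ~20 MB fetch buffer (size 20971590), so either the heap is too small for the configured fetch size or an unusually large message is coming through; the old 0.8-style kafka input also has a fetch_message_max_bytes setting that caps this buffer. As a sketch for the heap side (LS_HEAP_SIZE applies to older Logstash releases; newer versions set -Xmx in config/jvm.options instead):

```
# give the Logstash JVM more heap before starting it (older releases)
export LS_HEAP_SIZE=2g
bin/logstash -f kafka.conf
```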
Kafka cluster error, urgent
```
WARN [Controller-1-to-broker-2-send-thread], Controller 1's connection to broker Node(2, mine-28, 9092) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to Node(2, mine-28, 9092) failed
    at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$1.apply(NetworkClientBlockingOps.scala:62)
    at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$1.apply(NetworkClientBlockingOps.scala:58)
    at kafka.utils.NetworkClientBlockingOps$$anonfun$kafka$utils$NetworkClientBlockingOps$$pollUntil$extension$2.apply(NetworkClientBlockingOps.scala:106)
    at kafka.utils.NetworkClientBlockingOps$$anonfun$kafka$utils$NetworkClientBlockingOps$$pollUntil$extension$2.apply(NetworkClientBlockingOps.scala:105)
    at kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:129)
    at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
    at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
    at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:225)
    at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:172)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
```
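Controller 1 cannot open a TCP connection to Node(2, mine-28, 9092), so the first things to verify are that the broker process on mine-28 is actually running, that port 9092 is open, and that the hostname mine-28 resolves from the controller host. A sketch of the hosts entry that should exist on every broker (the IP is a placeholder):

```
# /etc/hosts on every broker - the hostname must resolve cluster-wide
192.168.1.28   mine-28
```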
How to fix this error when Flume reads from Kafka
Error log:

```
Source.java:120)] Event #: 0
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 965
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 975
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 985
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 995
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 1006
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [ERROR - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:153)] KafkaSource EXCEPTION, {}
java.lang.NullPointerException
    at org.apache.flume.instrumentation.MonitoredCounterGroup.increment(MonitoredCounterGroup.java:261)
    at org.apache.flume.instrumentation.kafka.KafkaSourceCounter.incrementKafkaEmptyCount(KafkaSourceCounter.java:49)
    at org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:146)
    at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:139)
    at java.lang.Thread.run(Thread.java:748)
```

Configuration file:

```
kafkaLogger.sources = kaSource
kafkaLogger.channels = memoryChannel
kafkaLogger.sinks = kaSink

# The channel can be defined as follows.
kafkaLogger.sources.kaSource.channels = memoryChannel
kafkaLogger.sources.kaSource.type= org.apache.flume.source.kafka.KafkaSource
kafkaLogger.sources.kaSource.zookeeperConnect=192.168.130.4:2181,192.168.130.5:2181,192.168.130.6:2181
kafkaLogger.sources.kaSource.topic=dwd-topic
kafkaLogger.sources.kaSource.groupId = 0

kafkaLogger.channels.memoryChannel.type=memory
kafkaLogger.channels.memoryChannel.capacity = 1000
kafkaLogger.channels.memoryChannel.keep-alive = 60

kafkaLogger.sinks.kaSink.type = elasticsearch
kafkaLogger.sinks.kaSink.hostNames = 192.168.130.6:9300
kafkaLogger.sinks.kaSink.indexName = flume_mq_es_d
kafkaLogger.sinks.kaSink.indexType = flume_mq_es
kafkaLogger.sinks.kaSink.clusterName = zyuc-elasticsearch
kafkaLogger.sinks.kaSink.batchSize = 100
kafkaLogger.sinks.kaSink.client = transport
kafkaLogger.sinks.kaSink.serializer = com.commons.flume.sink.elasticsearch.CommonElasticSearchIndexRequestBuilderFactory
kafkaLogger.sinks.kaSink.serializer.parse = com.commons.log.parser.LogTextParser
kafkaLogger.sinks.kaSink.serializer.formatPattern = yyyyMMdd
kafkaLogger.sinks.kaSink.serializer.dateFieldName = time
kafkaLogger.sinks.kaSink.channel = memoryChannel
```
Database connection error when integrating Spring + MyBatis
When integrating Spring + MyBatis I get a "could not get database connection" error - but not at project startup; it only appears when executing a database insert.

```
com.kom.base.canal.exception.ErrorHandleException: com.kom.base.canal.exception.ErrorHandleException: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
    at com.kom.base.canal.consumer.CanalEventDispatcherLocalImpl.dispatherEvent(CanalEventDispatcherLocalImpl.java:50)
    at com.kom.base.canal.consumer.CanalEventConsumerKafkaImpl$CanalEventConsumerListener.onMessage(CanalEventConsumerKafkaImpl.java:70)
    at com.kom.base.kafka.consumer.ConsumerOfKafkaImpl$1.run(ConsumerOfKafkaImpl.java:109)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.kom.base.canal.exception.ErrorHandleException: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
    at com.kom.base.canal.consumer.handler.BaseCanalEventHandler.handleEvent(BaseCanalEventHandler.java:58)
    at com.kom.base.canal.consumer.CanalEventDispatcherLocalImpl.dispatherEvent(CanalEventDispatcherLocalImpl.java:41)
    ... 7 more
Caused by: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
    at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:75)
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:371)
    at com.sun.proxy.$Proxy7.insert(Unknown Source)
    at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:240)
    at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:51)
    at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:52)
    at com.sun.proxy.$Proxy8.insert(Unknown Source)
    at com.kom.giant.realtime.orderstatistics.dao.RtBuyerAreaAmountRankDao.insert(RtBuyerAreaAmountRankDao.java:15)
    at com.kom.giant.realtime.orderstatistics.service.OrderMainChangeHandler.rowUpdateHandle(OrderMainChangeHandler.java:46)
    at com.kom.base.canal.consumer.handler.BaseCanalEventHandler.handleEvent(BaseCanalEventHandler.java:44)
    ... 8 more
Caused by: org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
    at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:26)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:154)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.insert(DefaultSqlSession.java:141)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:358)
    ... 16 more
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
    at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
    at org.mybatis.spring.transaction.SpringManagedTransaction.openConnection(SpringManagedTransaction.java:81)
    at org.mybatis.spring.transaction.SpringManagedTransaction.getConnection(SpringManagedTransaction.java:67)
    at org.apache.ibatis.executor.BaseExecutor.getConnection(BaseExecutor.java:279)
    at org.apache.ibatis.executor.SimpleExecutor.prepareStatement(SimpleExecutor.java:72)
    at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:47)
    at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:105)
    at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:71)
    at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:152)
    ... 22 more
Caused by: com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
    at com.alibaba.druid.pool.DruidDataSource.getConnectionInternal(DruidDataSource.java:1043)
    at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:941)
    at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4544)
    at com.alibaba.druid.filter.stat.StatFilter.dataSource_getConnection(StatFilter.java:661)
    at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4540)
    at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:919)
    at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:911)
    at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:98)
    at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
    at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77)
```