Kafka fails to start with java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V

Things I have already tried: downgrading ZooKeeper so that it matches the version Kafka depends on (ZooKeeper 3.4.14 with Kafka 2.3), and removing the Scala environment variable. Neither helped.
There is only one JAVA_HOME on the machine.
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:238)
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$2(ZooKeeperClient.scala:160)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1(ZooKeeperClient.scala:160)
at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1$adapted(ZooKeeperClient.scala:156)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:156)
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1660)
at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1647)
at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1642)
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1712)
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
at kafka.server.KafkaServer.startup(KafkaServer.scala:262)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
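The missing method is the asynchronous overload of ZooKeeper.multi, which older ZooKeeper client jars do not have. Kafka 2.3 ships its own zookeeper jar under libs/, so this error usually means an older zookeeper jar elsewhere on the classpath (a CLASSPATH environment variable, a Hadoop or HBase install, a stray jar copied into libs/) is being loaded first. A minimal diagnostic sketch, assuming it is run with the same classpath the broker uses, is to print where the class actually comes from:

```java
// Prints the jar that provides org.apache.zookeeper.ZooKeeper on this
// classpath; if it is not the zookeeper jar shipped in Kafka's libs/
// directory, that older jar is shadowing it and causes the NoSuchMethodError.
public class WhichZooKeeperJar {
    public static void main(String[] args) {
        System.out.println(org.apache.zookeeper.ZooKeeper.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}
```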

Other related questions
Flume fails to start with java.lang.SecurityException: sealing violation: package org.apache.flume.conf is sealed
After starting Hadoop, ZooKeeper, and Kafka, I configured conf and used Flume to consume the messages produced into Kafka. The command used to start Flume: flume-ng agent --conf /home/hduser/apps/flume/conf --conf-file /home/hduser/apps/flume/conf/applog --name a1 -Dflume.root.logger=INFO,console
The error on the first start:
```
2019-04-27 13:20:32,643 (main) [ERROR - org.apache.flume.node.Application.main(Application.java:374)] A fatal error occurred while running. Exception follows.
java.lang.SecurityException: sealing violation: package org.apache.flume.conf is sealed
at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:399)
at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.flume.node.Application.main(Application.java:350)
```
I searched the web for ages and am out of ideas. Does anyone know what to do? Does this mean the package org.apache.flume.conf is sealed? After starting it several more times the error becomes:
```
(main) [ERROR - org.apache.flume.node.Application.main(Application.java:374)] A fatal error occurred while running. Exception follows.
java.lang.SecurityException: sealing violation: can't seal package org.apache.flume.conf: already loaded
at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:406)
at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.flume.node.Application.main(Application.java:350)
```
Now it says the package was already loaded, which confuses me even more. Please help!
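An editor's note, not from the thread: a sealing violation means classes of a sealed package are being loaded from more than one location, which typically happens when two copies of the Flume jars end up on the classpath. A small sketch, under that assumption, that lists every classpath entry providing org.apache.flume.conf:

```java
import java.net.URL;
import java.util.Enumeration;

// Lists every classpath location that contains the org.apache.flume.conf
// package; seeing more than one jar here would explain the sealing violation.
public class FindFlumeConfJars {
    public static void main(String[] args) throws Exception {
        Enumeration<URL> urls = FindFlumeConfJars.class.getClassLoader()
                .getResources("org/apache/flume/conf");
        while (urls.hasMoreElements()) {
            System.out.println(urls.nextElement());
        }
    }
}
```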
Kafka and Spring integration problem: Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties
Integrating Spring Boot with Kafka fails with "Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties". Details:
```
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:507) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.util.ReflectionUtils.doWithLocalMethods(ReflectionUtils.java:367) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.buildLifecycleMetadata(InitDestroyAnnotationBeanPostProcessor.java:208) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.findLifecycleMetadata(InitDestroyAnnotationBeanPostProcessor.java:189) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(InitDestroyAnnotationBeanPostProcessor.java:128) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(CommonAnnotationBeanPostProcessor.java:297) ~[spring-context-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:1013) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:547) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
... 15 common frames omitted
Caused by: java.lang.NoClassDefFoundError: org/springframework/kafka/listener/config/ContainerProperties
at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_131]
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_131]
at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_131]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:489) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
... 22 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_131]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_131]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) ~[na:1.8.0_131]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_131]
... 26 common frames omitted
```
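A note on the likely cause (not from the thread): ContainerProperties moved from org.springframework.kafka.listener.config to org.springframework.kafka.listener in spring-kafka 2.2, while the Spring Boot 2.0.x autoconfiguration above (spring-* 5.0.13.RELEASE) was compiled against the old package. Letting the Boot dependency management pick the spring-kafka version, rather than overriding it with a newer one, usually resolves this. The package difference, as a sketch:

```java
// spring-kafka 2.1.x and earlier (what Spring Boot 2.0.x autoconfig expects):
//   import org.springframework.kafka.listener.config.ContainerProperties;
// spring-kafka 2.2 and later (the class moved; the old package no longer exists):
import org.springframework.kafka.listener.ContainerProperties;

public class ContainerPropertiesDemo {
    public static void main(String[] args) {
        // Constructing one directly just shows which package resolves on the
        // classpath in use; the topic name here is illustrative.
        ContainerProperties props = new ContainerProperties("demo-topic");
        System.out.println(props);
    }
}
```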
Exception when integrating Kafka with Storm
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: No leader found for partition 1
at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135)
at backtype.storm.daemon.executor$fn__6579$fn__6594$fn__6623.invoke(executor.clj:565)
at backtype.storm.util$async_loop$fn__459.invoke(util.clj:463)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: No leader found for partition 1
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81)
at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:79)
... 6 more
Caused by: java.lang.RuntimeException: No leader found for partition 1
at storm.kafka.DynamicBrokersReader.getLeaderFor(DynamicBrokersReader.java:120)
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:68)
... 7 more
Storm + Kafka integration error, could someone take a look at what is going on?
70308 [Thread-29-spout-read-kafka] INFO s.k.ZkCoordinator - Task [1/1] New partition managers: [Partition{host=pamshost02:9092, partition=9}, Partition{host=pamshost02:9092, partition=8}, Partition{host=pamshost02:9092, partition=7}, Partition{host=pamshost02:9092, partition=6}, Partition{host=pamshost02:9092, partition=5}, Partition{host=pamshost02:9092, partition=4}, Partition{host=pamshost02:9092, partition=3}, Partition{host=pamshost02:9092, partition=0}, Partition{host=pamshost02:9092, partition=1}, Partition{host=pamshost02:9092, partition=2}]
70599 [Thread-29-spout-read-kafka] INFO s.k.PartitionManager - Read partition information from: /detect/readKafka/partition_9 --> null
91326 [Thread-29-spout-read-kafka] INFO k.c.SimpleConsumer - Reconnect due to socket error: java.nio.channels.ClosedChannelException
91327 [Thread-29-spout-read-kafka] ERROR b.s.util - Async loop died!
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[storm-kafka-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$fn__5624$fn__5639$fn__5670.invoke(executor.clj:607) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[kafka_2.11-0.9.0.0.jar:?]
at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[storm-kafka-0.10.0.jar:0.10.0]
... 6 more
91329 [Thread-29-spout-read-kafka] ERROR b.s.d.executor -
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[storm-kafka-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$fn__5624$fn__5639$fn__5670.invoke(executor.clj:607) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[kafka_2.11-0.9.0.0.jar:?]
at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[storm-kafka-0.10.0.jar:0.10.0]
at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[storm-kafka-0.10.0.jar:0.10.0]
... 6 more
91535 [Thread-29-spout-read-kafka] ERROR b.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:336) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.6.0.jar:?]
at backtype.storm.daemon.worker$fn__7184$fn__7185.invoke(worker.clj:532) [storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$mk_executor_data$fn__5523$fn__5524.invoke(executor.clj:261) [storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:489) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
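A hedged diagnostic, not from the thread: storm-kafka reads each broker's host and port from ZooKeeper and then connects to it directly, so a ClosedChannelException like the one above usually means the advertised address (pamshost02:9092 in this log) cannot actually be reached from the worker machine. A plain TCP check:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Reachability check for the broker address storm-kafka got from ZooKeeper;
// a failure here would explain the ClosedChannelException in the log above.
public class BrokerReachability {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("pamshost02", 9092), 5000);
            System.out.println("connected to pamshost02:9092");
        }
    }
}
```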
Using Kafka Connect with MySQL as input and MySQL as output fails with "record value schema is missing"
Using Kafka Connect with MySQL as both the input and the output, I get the error "record value schema is missing". The error log:
[2019-10-31 14:37:32,956] ERROR WorkerSinkTask{id=mysql-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:544)
org.apache.kafka.connect.errors.ConnectException: PK mode for table 'dim_channel_copy' is RECORD_VALUE, but record value schema is missing
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extractRecordValuePk(FieldsMetadata.java:238)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:102)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:71)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:302)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
[2019-10-31 14:37:32,957] ERROR WorkerSinkTask{id=mysql-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:302)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: PK mode for table 'dim_channel_copy' is RECORD_VALUE, but record value schema is missing
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extractRecordValuePk(FieldsMetadata.java:238)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:102)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:71)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
... 10 more
[2019-10-31 14:37:32,958] ERROR WorkerSinkTask{id=mysql-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:173)
[2019-10-31 14:37:32,958] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:100)
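Background, as an editor's note: with pk.mode set to record_value the JDBC sink derives the primary key from fields of the record's value, so every record must carry a Connect schema; schemaless JSON (JsonConverter with schemas.enable=false) arrives with a null value schema and fails exactly this way. The usual fix is a schema-carrying converter, for example Avro with Schema Registry, or JSON with schemas.enable=true. A small sketch of a schema'd value in Connect's data API, with illustrative field names:

```java
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

// What RECORD_VALUE PK mode needs: a value that carries a schema, so the
// sink can look up PK columns by field name. Field names are hypothetical.
public class SchemaedValueDemo {
    public static void main(String[] args) {
        Schema valueSchema = SchemaBuilder.struct().name("dim_channel")
                .field("id", Schema.INT64_SCHEMA)
                .field("name", Schema.STRING_SCHEMA)
                .build();
        Struct value = new Struct(valueSchema)
                .put("id", 1L)
                .put("name", "demo");
        System.out.println(value);
    }
}
```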
kafka.common.KafkaException:
```
package com;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {

    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer producer = createProducer();
        int i = 0;
        while (true) {
            producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer createProducer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "localhost:2181"); // ZooKeeper address
        properties.put("serializer.class", StringEncoder.class.getName());
        properties.put("metadata.broker.list", "localhost:9092"); // Kafka broker list
        return new Producer<Integer, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaProducer("test").start(); // uses the topic "test" already created on the Kafka cluster
    }
}
```
The error:
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
at kafka.utils.Utils$.swallow(Utils.scala:172)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:45)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at com.kafkaProducer.run(kafkaProducer.java:29)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 9 more
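A hedged diagnostic, not from the thread: fetchTopicMetadata failing with ClosedChannelException almost always means nothing reachable is listening at metadata.broker.list (the broker is not running on localhost:9092, or it is bound or advertised on a different host). The same 0.8-era client can ask the broker for topic metadata directly, which separates the connectivity question from the producer code; a sketch, assuming the Kafka 0.8 jars are on the classpath:

```java
import java.util.Collections;

import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

// Probe against the 0.8 API: fetch metadata for topic "test" from
// localhost:9092. If this also throws ClosedChannelException, the broker is
// simply not reachable at that address.
public class MetadataProbe {
    public static void main(String[] args) {
        SimpleConsumer consumer =
                new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "metadata-probe");
        try {
            TopicMetadataResponse response = consumer.send(
                    new TopicMetadataRequest(Collections.singletonList("test")));
            System.out.println(response.topicsMetadata());
        } finally {
            consumer.close();
        }
    }
}
```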
How do I solve this problem with a Kafka cluster built on Docker?
First, some context for this error: I did not build this in my own VM. Huawei gave me a server, so I installed Docker directly on it, created three containers running Kafka, and built the cluster with docker-compose. The port mappings are as shown. But when IDEA connects to the Kafka cluster, it first connects to IP:5000,5002,5004, then connects to the returned host.name = kafka1,kafka2,kafka3, and finally keeps connecting to advertised.host.name = kafka1,kafka2,kafka3. On an ordinary server this would be fine: just add host-to-IP mappings to the local hosts file. But for these containers that cannot be done; the container IPs are assigned on an internal network, so my local machine can never reach them.
20/01/16 22:11:04 INFO AppInfoParser: Kafka version: 2.4.0
20/01/16 22:11:04 INFO AppInfoParser: Kafka commitId: 77a89fcf8d7fa018
20/01/16 22:11:04 INFO AppInfoParser: Kafka startTimeMs: 1579183864167
20/01/16 22:11:04 INFO KafkaConsumer: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Subscribed to topic(s): test, topicongbo
20/01/16 22:11:04 INFO Metadata: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Cluster ID: Kkwgy0gkSkmGAlsC_5cz9A
20/01/16 22:11:04 INFO AbstractCoordinator: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Discovered group coordinator kafka3:9092 (id: 2147483644 rack: null)
20/01/16 22:11:06 WARN NetworkClient: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Error connecting to node kafka3:9092 (id: 2147483644 rack: null)
java.net.UnknownHostException: kafka3
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:572)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:757)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:737)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:599)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:409)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:294)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.paranoidPoll(DirectKafkaInputDStream.scala:172)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:260)
at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7(DStreamGraph.scala:54)
at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7$adapted(DStreamGraph.scala:54)
at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:145)
at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:974)
at scala.collection.parallel.Task.$anonfun$tryLeaf$1(Tasks.scala:53)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.control.Breaks$$anon$1.catchBreak(Breaks.scala:67)
at scala.collection.parallel.Task.tryLeaf(Tasks.scala:56)
at scala.collection.parallel.Task.tryLeaf$(Tasks.scala:50)
at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:971)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:153)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
So how can this error be solved? Also, I have no permission to modify the Huawei security group; only ports 5000-5010 are open to the outside.
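Two hedged notes from outside the thread. First, the symptom can be confirmed by checking whether the advertised names resolve from the client machine. Second, the usual cure is to set each broker's advertised.listeners to an address and port the external client can actually reach (here, the server's public IP with the mapped 5000-range ports), because the client always reconnects to whatever address the broker advertises, no matter which address it bootstrapped from.

```java
import java.net.InetAddress;

// Resolution check for the names the brokers advertise; with the docker
// setup described above these fail from outside the host, which is exactly
// the UnknownHostException in the log.
public class AdvertisedHostCheck {
    public static void main(String[] args) {
        for (String host : new String[] {"kafka1", "kafka2", "kafka3"}) {
            try {
                System.out.println(host + " -> "
                        + InetAddress.getByName(host).getHostAddress());
            } catch (Exception e) {
                System.out.println(host + " does not resolve: " + e);
            }
        }
    }
}
```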
kafka ArrayIndexOutOfBoundsException: 18
I hit the problem below with Kafka. Everything I found online says it happens when a newer Kafka client sends an ApiVersion (key: 18) request to an old Kafka broker (< 0.10), which does not support it. But all of my producers and consumers, and the Kafka client installed on the Kafka servers, are 0.9.0.1, so this should not be possible. Why does it happen? Any pointers appreciated.
```
[2018-10-25 10:03:17,919] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2018-10-25 10:03:18,080] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic-test,0] (kafka.server.ReplicaFetcherManager)
[2018-10-25 10:03:18,099] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic-test,0] (kafka.server.ReplicaFetcherManager)
[2018-10-25 10:03:48,864] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:748)
```
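An editor's note on what id 18 is: in 0.10+ client jars, API key 18 is the ApiVersions request, and the 0.9 broker's lookup table does not go that high, hence the ArrayIndexOutOfBoundsException. So some 0.10+ client on the network (a console tool, a monitoring agent, or one machine with a stray newer jar) really did send that request, even if all the known producers and consumers are 0.9.0.1. A sketch to confirm the mapping, assuming it is run against a 0.10+ kafka-clients jar:

```java
import org.apache.kafka.common.protocol.ApiKeys;

// On a 0.10+ kafka-clients jar this prints the ApiVersions key; on 0.9.0.1
// the same lookup throws ArrayIndexOutOfBoundsException: 18, as in the log.
public class ApiKeyLookup {
    public static void main(String[] args) {
        System.out.println(ApiKeys.forId((short) 18));
    }
}
```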
On Windows, Kafka fails on restart after creating a topic
[2017-12-20 17:20:15,475] WARN Found a corrupted index file due to requirement failed: Corrupt index found, index file (D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.index) has non-zero size but the last offset is 0 which is no larger than the base offset 0.}. deleting D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.timeindex, D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.index, and D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.txnindex and rebuilding index... (kafka.log.Log)
[2017-12-20 17:20:15,475] ERROR Error while loading log dir D:\program\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
java.nio.file.FileSystemException: D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.timeindex: the file is being used by another program; the process cannot access it.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log.$anonfun$loadSegmentFiles$3(Log.scala:335)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:191)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
at kafka.log.Log.loadSegmentFiles(Log.scala:297)
at kafka.log.Log.loadSegments(Log.scala:406)
at kafka.log.Log.<init>(Log.scala:203)
at kafka.log.Log$.apply(Log.scala:1735)
at kafka.log.LogManager.loadLog(LogManager.scala:231)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Spark Streaming writing data to Kafka reports "Error in I/O..."
I deployed single-node Kafka 0.8.2.1 following the official Kafka site. The deployment commands, in order:
bin/zookeeper-server-start.sh config/zookeeper.properties &   (start the bundled ZooKeeper)
bin/kafka-server-start.sh config/server.properties &   (start the Kafka service)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test &   (create the topic)
With the official commands, starting a producer, writing data, and then starting a consumer works; the data is consumed normally. Next, in Spark 1.5.2 (whose official docs say the matching Kafka version is 0.8.2.1), all the jars I used are 0.8.2.1 (at least all the Kafka-related ones). The code:
val sparkConf = new SparkConf().setAppName("kafka")
val ssc = new StreamingContext(sparkConf, Seconds(10))
val properties = new Properties()
properties.put("bootstrap.servers", "100.173.249.68:2181")
properties.put("metadata.broker.list", "100.173.249.68:9092")
properties.put("group.id", "test-consumer-group")
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
val producer = new KafkaProducer[String, String](properties)
val record = new ProducerRecord[String, String]("test", "push data")
producer.send(record)
ssc.start()
ssc.awaitTermination()
Packaging this code into a jar and running it on the server fails. On the Kafka side these errors scroll non-stop:
INFO Accepted socket connection from /10.173.249.68:58489 (org.apache.zookeeper.server.NIOServerCnxnFactory)
WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
INFO Closed socket connection for client /10.173.249.68:58489 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
At the same time the Spark side also keeps printing errors:
WARN network.Selector: Error in I/O with chenxm/10.173.249.68
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Thread.java:745)
I have no idea what the cause is. Please help, many thanks.
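One thing stands out in the code above (an editor's observation, not from the thread): bootstrap.servers is set to 100.173.249.68:2181, which is ZooKeeper's port. The new KafkaProducer speaks the Kafka protocol directly to every address listed in bootstrap.servers and never talks to ZooKeeper, which matches the ZooKeeper log filling with connections that open and immediately die with EOFException. A corrected sketch of the producer configuration:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Fixed sketch: bootstrap.servers points at the Kafka broker (9092), not at
// ZooKeeper (2181); the new producer needs no ZooKeeper address at all.
public class FixedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "100.173.249.68:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            producer.send(new ProducerRecord<>("test", "push data"));
        } finally {
            producer.close();
        }
    }
}
```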
Kafka execution error, please help
```
Exception in thread "main" org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
org.apache.spark.SparkException: Couldn't find leader offsets for Set([ts_bg0_type0,0], [ts_bg0_type0,1], [ts_bg0_type0,2])
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:385)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:385)
at scala.util.Either.fold(Either.scala:98)
at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:384)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at cn.test.Kafka_Streaming_cv$.main(Kafka_Streaming_cv.scala:49)
at cn.test.Kafka_Streaming_cv.main(Kafka_Streaming_cv.scala)
```
Running a Flume agent produces the following error
My configuration:
```
agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type=spooldir
agent.sources.s1.spoolDir=/tmp/logs/tomcat2kafka
agent.sources.s1.channels=c1

agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100

# Kafka sink
agent.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.k1.brokerList=222.30.194.254:9092
# Kafka topic
agent.sinks.k1.topic=kafkatest2
# serializer class
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.channel=c1
```
The error:
```
[ERROR - org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:240)] Failed to publish events
org.apache.kafka.common.errors.InterruptException: Flush interrupted.
at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:546)
at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:224)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
... 4 more
```
There really is no answer for this online; I'm at a loss. Offering points for help.
Kafka and Storm integration error: cannot find a jar
While integrating Kafka and Storm in Java I get an error about a missing jar, but after searching online for a long time I still could not find the jar the error asks for. What is going on? I used the crudest method: importing all the jars into the project directly. The Kafka version is 2.9.2-0.8.2.1, Storm is 0.9.7, and the JDK is 1.7. Error log:
9391 [refresh-active-timer] INFO backtype.storm.daemon.worker - All connections are ready for worker 5e95c764-cf8b-4ac9-9c89-911e34720c23:1024 with id b788e814-1915-4116-87ad-1514bc9a201b
9417 [Thread-15-__system] INFO backtype.storm.daemon.executor - Preparing bolt __system:(-1)
9428 [Thread-15-__system] INFO backtype.storm.daemon.executor - Prepared bolt __system:(-1)
9437 [Thread-17-__acker] INFO backtype.storm.daemon.executor - Preparing bolt __acker:(1)
9440 [Thread-17-__acker] INFO backtype.storm.daemon.executor - Prepared bolt __acker:(1)
9457 [Thread-11-kafkabolt] INFO backtype.storm.daemon.executor - Preparing bolt kafkabolt:(3)
9465 [Thread-9-bolt] INFO backtype.storm.daemon.executor - Preparing bolt bolt:(2)
9465 [Thread-9-bolt] INFO backtype.storm.daemon.executor - Prepared bolt bolt:(2)
9480 [Thread-13-spout] INFO backtype.storm.daemon.executor - Opening spout spout:(4)
9484 [Thread-13-spout] ERROR backtype.storm.util - Async loop died!
java.lang.NoClassDefFoundError: com/netflix/curator/RetryPolicy
at storm.kafka.KafkaSpout.open(KafkaSpout.java:68) ~[storm-kafka.jar:na]
at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:529) ~[storm-core-0.9.7.jar:0.9.7]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.7.jar:0.9.7]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_17]
Caused by: java.lang.ClassNotFoundException: com.netflix.curator.RetryPolicy
at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_17]
at java.net.URLClassLoader.findClass(Unknown Source) ~[na:1.7.0_17]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
... 5 common frames omitted
9485 [Thread-13-spout] ERROR backtype.storm.daemon.executor -
java.lang.NoClassDefFoundError: com/netflix/curator/RetryPolicy
at storm.kafka.KafkaSpout.open(KafkaSpout.java:68) ~[storm-kafka.jar:na]
at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:529) ~[storm-core-0.9.7.jar:0.9.7]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.7.jar:0.9.7]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_17]
Caused by: java.lang.ClassNotFoundException: com.netflix.curator.RetryPolicy
at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
at java.net.URLClassLoader$1.run(Unknown Source) ~[na:1.7.0_17]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_17]
at java.net.URLClassLoader.findClass(Unknown Source) ~[na:1.7.0_17]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[na:1.7.0_17]
... 5 common frames omitted
9725 [Thread-11-kafkabolt] INFO backtype.storm.daemon.executor - Prepared bolt kafkabolt:(3)
9885 [Thread-13-spout] ERROR backtype.storm.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:325) [storm-core-0.9.7.jar:0.9.7]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker$fn__4694$fn__4695.invoke(worker.clj:495) [storm-core-0.9.7.jar:0.9.7]
at backtype.storm.daemon.executor$mk_executor_data$fn__3272$fn__3273.invoke(executor.clj:241) [storm-core-0.9.7.jar:0.9.7]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:473) [storm-core-0.9.7.jar:0.9.7]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_17]
In Logstash, using Kafka as an input source causes an out-of-memory error
log4j, [2017-09-11T09:39:02.399] ERROR: kafka.network.BoundedByteBufferReceive: OOME with size 20971590
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
How do I solve this problem?
Kafka cluster error, waiting online for help
WARN [Controller-1-to-broker-2-send-thread], Controller 1's connection to broker Node(2, mine-28, 9092) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to Node(2, mine-28, 9092) failed
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$1.apply(NetworkClientBlockingOps.scala:62)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$1.apply(NetworkClientBlockingOps.scala:58)
at kafka.utils.NetworkClientBlockingOps$$anonfun$kafka$utils$NetworkClientBlockingOps$$pollUntil$extension$2.apply(NetworkClientBlockingOps.scala:106)
at kafka.utils.NetworkClientBlockingOps$$anonfun$kafka$utils$NetworkClientBlockingOps$$pollUntil$extension$2.apply(NetworkClientBlockingOps.scala:105)
at kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:129)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:225)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:172)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
How do I fix this error when Flume collects from Kafka?
Error message:
Source.java:120)] Event #: 0
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 965
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 975
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 985
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 995
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 1006
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [ERROR - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:153)] KafkaSource EXCEPTION, {}
java.lang.NullPointerException
at org.apache.flume.instrumentation.MonitoredCounterGroup.increment(MonitoredCounterGroup.java:261)
at org.apache.flume.instrumentation.kafka.KafkaSourceCounter.incrementKafkaEmptyCount(KafkaSourceCounter.java:49)
at org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:146)
at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:139)
at java.lang.Thread.run(Thread.java:748)
--------------------------------------------
Configuration file:
kafkaLogger.sources = kaSource
kafkaLogger.channels = memoryChannel
kafkaLogger.sinks = kaSink
# The channel can be defined as follows.
kafkaLogger.sources.kaSource.channels = memoryChannel
kafkaLogger.sources.kaSource.type= org.apache.flume.source.kafka.KafkaSource
kafkaLogger.sources.kaSource.zookeeperConnect=192.168.130.4:2181,192.168.130.5:2181,192.168.130.6:2181
kafkaLogger.sources.kaSource.topic=dwd-topic
kafkaLogger.sources.kaSource.groupId = 0
kafkaLogger.channels.memoryChannel.type=memory
kafkaLogger.channels.memoryChannel.capacity = 1000
kafkaLogger.channels.memoryChannel.keep-alive = 60
kafkaLogger.sinks.kaSink.type = elasticsearch
kafkaLogger.sinks.kaSink.hostNames = 192.168.130.6:9300
kafkaLogger.sinks.kaSink.indexName = flume_mq_es_d
kafkaLogger.sinks.kaSink.indexType = flume_mq_es
kafkaLogger.sinks.kaSink.clusterName = zyuc-elasticsearch
kafkaLogger.sinks.kaSink.batchSize = 100
kafkaLogger.sinks.kaSink.client = transport
kafkaLogger.sinks.kaSink.serializer = com.commons.flume.sink.elasticsearch.CommonElasticSearchIndexRequestBuilderFactory
kafkaLogger.sinks.kaSink.serializer.parse = com.commons.log.parser.LogTextParser
kafkaLogger.sinks.kaSink.serializer.formatPattern = yyyyMMdd
kafkaLogger.sinks.kaSink.serializer.dateFieldName = time
kafkaLogger.sinks.kaSink.channel = memoryChannel
Problem using LZO compression with Flume
Currently I use Flume to extract log data, with a Flume interceptor routing the logs into different Kafka topics; another Flume agent then reads from the Kafka topics and writes the data to HDFS with LZO compression. Flume fails at the LZO compression step with the following error:
```
2020-01-30 19:38:12,842 (conf-file-poller-0) [WARN - org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java:62)] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-01-30 19:38:13,294 (conf-file-poller-0) [ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:426)] Sink k1 has been removed due to an error during configuration
java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:139)
at org.apache.flume.sink.hdfs.HDFSEventSink.getCodec(HDFSEventSink.java:313)
at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:237)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:411)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:141)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:132)
... 13 more
2020-01-30 19:38:13,297 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: k2, type: hdfs
2020-01-30 19:38:13,356 (conf-file-poller-0) [ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:426)] Sink k2 has been removed due to an error during configuration
java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:139)
at org.apache.flume.sink.hdfs.HDFSEventSink.getCodec(HDFSEventSink.java:313)
at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:237)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:411)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:141)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
```
The Flume agent configuration (consuming from Kafka) is as follows:
```
## components
a1.sources=r1 r2
a1.channels=c1 c2
a1.sinks=k1 k2
## source1
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = bigdata01:9092,bigdata02:9092,bigdata03:9092
a1.sources.r1.kafka.topics=topic_start
## source2
a1.sources.r2.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r2.batchSize = 5000
a1.sources.r2.batchDurationMillis = 2000
a1.sources.r2.kafka.bootstrap.servers = bigdata01:9092,bigdata02:9092,bigdata03:9092
a1.sources.r2.kafka.topics=topic_event
## channel1
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /opt/modules/apache-flume-1.7.0-bin/checkpoint/behavior1
a1.channels.c1.dataDirs = /opt/modules/apache-flume-1.7.0-bin/data/behavior1/
a1.channels.c1.maxFileSize = 2146435071
a1.channels.c1.capacity = 1000000
a1.channels.c1.keep-alive = 6
## channel2
a1.channels.c2.type = file
a1.channels.c2.checkpointDir = /opt/modules/apache-flume-1.7.0-bin/checkpoint/behavior2
a1.channels.c2.dataDirs = /opt/modules/apache-flume-1.7.0-bin/data/behavior2/
a1.channels.c2.maxFileSize = 2146435071
a1.channels.c2.capacity = 1000000
a1.channels.c2.keep-alive = 6
## sink1
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://bigdata01:8020/origin_data/gmall/log/topic_start/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = logstart-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = second
## sink2
a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = hdfs://bigdata01:8020/origin_data/gmall/log/topic_event/%Y-%m-%d
a1.sinks.k2.hdfs.filePrefix = logevent-
a1.sinks.k2.hdfs.round = true
a1.sinks.k2.hdfs.roundValue = 10
a1.sinks.k2.hdfs.roundUnit = second
## avoid producing many small files
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k2.hdfs.rollInterval = 10
a1.sinks.k2.hdfs.rollSize = 134217728
a1.sinks.k2.hdfs.rollCount = 0
## output file type (compressed stream)
a1.sinks.k1.hdfs.fileType = CompressedStream
a1.sinks.k2.hdfs.fileType = CompressedStream
a1.sinks.k1.hdfs.codeC = lzop
a1.sinks.k2.hdfs.codeC = lzop
## wiring
a1.sources.r1.channels = c1
a1.sinks.k1.channel= c1
a1.sources.r2.channels = c2
a1.sinks.k2.channel= c2
```
The core-site.xml configuration in Hadoop is as follows:
```
<property>
  <name>io.compression.codecs</name>
  <value>
    org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec,
    com.hadoop.compression.lzo.LzoCodec,
    com.hadoop.compression.lzo.LzopCodec
  </value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```
The LZO jar has already been placed in the corresponding Hadoop directory:
```
/opt/modules/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar
```
I am not sure whether this is an environment variable problem. Very urgent, waiting online...
Database connection error when integrating Spring + MyBatis
When integrating Spring with MyBatis, getting a database connection fails. Nothing goes wrong at project startup; the error is only thrown when a database insert is executed:

```
com.kom.base.canal.exception.ErrorHandleException: com.kom.base.canal.exception.ErrorHandleException: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database.  Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
	at com.kom.base.canal.consumer.CanalEventDispatcherLocalImpl.dispatherEvent(CanalEventDispatcherLocalImpl.java:50)
	at com.kom.base.canal.consumer.CanalEventConsumerKafkaImpl$CanalEventConsumerListener.onMessage(CanalEventConsumerKafkaImpl.java:70)
	at com.kom.base.kafka.consumer.ConsumerOfKafkaImpl$1.run(ConsumerOfKafkaImpl.java:109)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.kom.base.canal.exception.ErrorHandleException: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database.  Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
	at com.kom.base.canal.consumer.handler.BaseCanalEventHandler.handleEvent(BaseCanalEventHandler.java:58)
	at com.kom.base.canal.consumer.CanalEventDispatcherLocalImpl.dispatherEvent(CanalEventDispatcherLocalImpl.java:41)
	... 7 more
Caused by: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database.  Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
	at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:75)
	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:371)
	at com.sun.proxy.$Proxy7.insert(Unknown Source)
	at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:240)
	at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:51)
	at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:52)
	at com.sun.proxy.$Proxy8.insert(Unknown Source)
	at com.kom.giant.realtime.orderstatistics.dao.RtBuyerAreaAmountRankDao.insert(RtBuyerAreaAmountRankDao.java:15)
	at com.kom.giant.realtime.orderstatistics.service.OrderMainChangeHandler.rowUpdateHandle(OrderMainChangeHandler.java:46)
	at com.kom.base.canal.consumer.handler.BaseCanalEventHandler.handleEvent(BaseCanalEventHandler.java:44)
	... 8 more
Caused by: org.apache.ibatis.exceptions.PersistenceException:
### Error updating database.  Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
	at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:26)
	at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:154)
	at org.apache.ibatis.session.defaults.DefaultSqlSession.insert(DefaultSqlSession.java:141)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:358)
	... 16 more
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
	at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
	at org.mybatis.spring.transaction.SpringManagedTransaction.openConnection(SpringManagedTransaction.java:81)
	at org.mybatis.spring.transaction.SpringManagedTransaction.getConnection(SpringManagedTransaction.java:67)
	at org.apache.ibatis.executor.BaseExecutor.getConnection(BaseExecutor.java:279)
	at org.apache.ibatis.executor.SimpleExecutor.prepareStatement(SimpleExecutor.java:72)
	at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:47)
	at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:105)
	at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:71)
	at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:152)
	... 22 more
Caused by: com.alibaba.druid.pool.DataSourceClosedException: dataSource already closed at Sat Oct 28 10:49:14 GMT+08:00 2017
	at com.alibaba.druid.pool.DruidDataSource.getConnectionInternal(DruidDataSource.java:1043)
	at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:941)
	at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4544)
	at com.alibaba.druid.filter.stat.StatFilter.dataSource_getConnection(StatFilter.java:661)
	at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4540)
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:919)
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:911)
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:98)
	at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
	at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77)
```
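The Druid message is literal: something called DruidDataSource.close() at 10:49:14, and every getConnection() after that point is rejected. Since the context starts cleanly and the failing insert runs on a Kafka consumer thread pool, a common cause is that the Spring context is closed (end of main, try-with-resources, or a framework shutdown) while the non-daemon consumer threads are still dispatching canal events. Below is a minimal sketch of that failure mode and one way to order the shutdown; the class and file names are illustrative, not taken from the project in the trace.

```
// Sketch only: illustrative names, assuming XML config and a plain ExecutorService.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ConsumerBootstrap {
    public static void main(String[] args) {
        final ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        final ExecutorService pool = Executors.newFixedThreadPool(4);
        // pool.submit(...): Kafka consumer listeners that call MyBatis mappers.

        // Anti-pattern: closing the context here (or letting the framework do it)
        // destroys the DruidDataSource bean while the pool is still consuming,
        // so the next mapper call fails with DataSourceClosedException.
        // ctx.close();

        // Safer: stop the consumers first, then close the context.
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                pool.shutdown();
                try {
                    pool.awaitTermination(30, TimeUnit.SECONDS);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                ctx.close(); // Druid is only closed after the last event is handled
            }
        }));
    }
}
```

If nothing in your own code closes the context, it is also worth checking for a duplicated DataSource definition (the same XML imported twice), where Spring destroys one copy while the other is still in use.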
A custom Kafka producer fails when connecting to a topic:

```
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
[2017-10-25 22:02:15,757] ERROR Failed to send requests for topics xutongtp with correlation ids in [0,12] (kafka.producer.async.DefaultEventHandler:99)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:98)
	at kafka.producer.Producer.send(Producer.scala:78)
	at kafka.javaapi.producer.Producer.send(Producer.scala:35)
	at kafka.transwarp.io.KafkaProducer.produce(KafkaProducer.java:34)
	at kafka.transwarp.io.KafkaMain.main(KafkaMain.java:7)
```
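With the legacy Scala producer API (kafka.javaapi.producer.Producer, Kafka 0.8.x), "Failed to send messages after 3 tries" usually means the producer fetched topic metadata but could not reach the host/port the broker advertises, or the topic does not exist and auto-creation is disabled. As a reference point, here is a minimal sketch of that API for the topic in the log; the broker address is an assumption and must be a host the client can actually resolve (it should match advertised.host.name/port in the broker's server.properties).

```
// Sketch for the legacy 0.8.x producer API; the broker address is an assumption.
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Must be reachable from the client machine, not just from the broker itself.
        props.put("metadata.broker.list", "broker-host:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1"); // wait for the partition leader's ack

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("xutongtp", "hello kafka"));
        producer.close();
    }
}
```

If this still fails after the retries, creating the topic explicitly (kafka-topics.sh --create ... --topic xutongtp) and verifying that broker-host:9092 is reachable from the client (e.g. with telnet) are the two usual fixes.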