How do I fix this error when integrating Spring Boot with Kafka?

I integrated Spring Boot with Kafka, and it keeps throwing this error:

 2018-05-20 18:56:26.920 ERROR 4428 --- [           main] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='myTest--------1' to topic myTest:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Has anyone figured out how to solve this?
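For context: this exception means the producer never managed to fetch cluster metadata within max.block.ms (60 s by default), i.e. the client either cannot reach the bootstrap address or cannot reach the address the broker advertises back. A bare kafka-clients producer with a short max.block.ms fails fast and isolates the problem from Spring — a sketch only, with the broker address as a placeholder:

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: the same public broker address used in application.yml.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "xxx.xxx.xxx.xxx:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Fail after 5 s instead of the default 60 s so the metadata problem shows quickly.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // get() forces the send to complete (or fail) before the JVM exits.
            producer.send(new ProducerRecord<>("myTest", "probe")).get(10, TimeUnit.SECONDS);
            System.out.println("send ok");
        }
    }
}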

The relevant code is below; it was adapted from examples found online:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.9.RELEASE</version>
</parent>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>

Kafka version and broker configuration:
kafka_2.11-1.1.0
Configuration:

 port = 9092
 host.name = <Aliyun internal IP>
 advertised.host.name = xxx.xxx.xx  (Aliyun public IP)

Other settings are left at the defaults.
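One thing worth noting: since Kafka 0.10.x, host.name / advertised.host.name are deprecated in favor of listeners / advertised.listeners. For a broker behind Aliyun NAT, the equivalent settings would look roughly like this (IPs are placeholders):

 listeners=PLAINTEXT://<Aliyun internal IP>:9092
 advertised.listeners=PLAINTEXT://<Aliyun public IP>:9092

The producer contacts the bootstrap address first, then switches to whatever address the broker advertises in the metadata it returns; if that advertised address is unreachable from the client, the result is exactly this metadata timeout.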
Then the Spring Boot configuration:

 kafka: 
   bootstrap-servers: xxx.xxx.xxx.xxx:9092 
   listener.concurrency: 3 
   producer.batch-size: 1000 
   consumer.group-id: test-consumer-group 
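For reference, Spring Boot's Kafka properties normally sit under the spring: prefix; a fuller sketch of the same settings in the canonical layout (the serializer entries are the stock String ones, added here on the assumption that plain string payloads are wanted):

spring:
  kafka:
    bootstrap-servers: xxx.xxx.xxx.xxx:9092
    listener:
      concurrency: 3
    producer:
      batch-size: 1000
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: test-consumer-group
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer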


@Configuration
@EnableKafka
public class KafkaConfiguration {

}


@Component
public class MsgProducer {

    private Logger log = LoggerFactory.getLogger(MsgProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String topicName, String jsonData) {
        log.info("向kafka推送数据:[{}]", jsonData);
        try {
            kafkaTemplate.send(topicName, jsonData);
        } catch (Exception e) {
            System.out.println("发送数据出错!!!"+topicName+","+jsonData);
            System.out.println("发送数据出错=====>"+e);
            log.error("发送数据出错!!!{}{}", topicName, jsonData);
            log.error("发送数据出错=====>", e);
        }

        // Listener for send results; its callbacks report the outcome of each send.
        // (Note: calling setProducerListener here re-registers it on every sendMessage call.)
        kafkaTemplate.setProducerListener(new ProducerListener<String, String>() {
            @Override
            public void onSuccess(String topic, Integer partition, String key, String value, RecordMetadata recordMetadata) {
                System.out.println("topic-----"+topic);
            }

            @Override
            public void onError(String topic, Integer partition, String key, String value, Exception exception) {
            }

            @Override
            public boolean isInterestedInSuccess() {
                log.info("数据发送完毕");
                System.out.println("数据发送完毕");
                return false;
            }
        });
    }
}
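An aside on the producer code above: because send() is asynchronous, the try/catch never sees broker-side failures such as this metadata timeout. In spring-kafka 1.x the per-message outcome can instead be observed through the ListenableFuture that send() returns — a sketch only, assuming the same kafkaTemplate and log fields as in MsgProducer:

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public void sendMessageWithCallback(String topicName, String jsonData) {
    ListenableFuture<SendResult<String, String>> future =
            kafkaTemplate.send(topicName, jsonData);
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            // RecordMetadata carries the partition/offset the broker assigned.
            log.info("sent ok: {}", result.getRecordMetadata());
        }

        @Override
        public void onFailure(Throwable ex) {
            // Failures like the question's TimeoutException arrive here asynchronously.
            log.error("send failed", ex);
        }
    });
}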
@Component
public class MsgConsumer {

        @KafkaListener(topics = {"myTest"})
        public void processMessage(String content) {
            System.out.println("消息被消费"+content);
        }
}

@RunWith(SpringRunner.class)
@SpringBootTest
public class TestKafka {

    @Autowired
    private MsgProducer msgProducer;

    @Test
    public void test() throws Exception {
        msgProducer.sendMessage("myTest", "myTest--------1");
    }
}
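Since send() is asynchronous, blocking on its result makes the underlying failure show up as the test failure itself — a sketch, assuming a KafkaTemplate<String, String> is autowired into the test (the 10 s timeout is arbitrary):

import java.util.concurrent.TimeUnit;

@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

@Test
public void testBlockingSend() throws Exception {
    // Block until the broker acks (or the send fails) so the failure
    // surfaces in the test rather than only in the logs.
    kafkaTemplate.send("myTest", "myTest--------1").get(10, TimeUnit.SECONDS);
}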

That's everything; running the test keeps producing this same error.

qq_31371169
「已注销」 replying to weixin_39588965: Did you ever solve this? I've hit the same problem.
9 months ago · Reply
weixin_39588965
弟中弟种地: Did you solve it? I'm running into the same problem too.
11 months ago · Reply

2 answers

Use a JSON string as the second argument to sendMessage, e.g.:

"{\"name\":\"myTest--------1\"}"

pengdingxu10
Instantiating it that way doesn't seem right either; I still get the same error...
over a year ago · Reply