How does the Kafka producer SDK push messages to a Kafka cluster?

How does the Kafka producer SDK push messages to a Kafka cluster? Could someone explain this? Thanks.

2 answers

a874909657: No, that's not it; I already set up the whole environment long ago.
(about 2 years ago, reply)
Other related questions
[Kafka] Kafka message producer throws exceptions

The server was running normally and Kafka had been fine, but from a certain point in time it started throwing exceptions continuously; after restarting the server everything returned to normal. What could be the cause? The vast majority of messages threw exceptions, but a small number threw no exception yet were never received either. What causes those messages that are lost without any exception? Known detail: the Kafka producer does not specify a partition. ![kafka producer exception](https://img-ask.csdn.net/upload/201804/28/1524849596_87543.png)
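For reference, per-record failures can be captured by passing a callback to send(); records that fail after retries surface their exception there instead of disappearing silently. A minimal sketch, assuming plain string messages (the broker address and topic name are placeholders, not from the question):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CallbackProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Failed sends (e.g. TimeoutException) land here
                            // instead of being lost without a trace.
                            System.err.println("send failed: " + exception);
                        } else {
                            System.out.printf("sent to partition %d at offset %d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
        } // close() flushes anything still buffered
    }
}
```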

Kafka consumers cannot consume messages

After deploying the Kafka cluster and the consumer servers in the production environment, logstash sends real-time logs to the Kafka cluster and the consumers consume them normally. But after about two minutes the consumers stop consuming, and I'd like to ask how to locate the problem.
1. The Kafka server logs show logstash is still pushing real-time logs to Kafka; the cluster's log retention time is two hours.
2. There are two consumer machines, and both are running at the same time.
3. The Kafka cluster has three servers. While investigating, I found the consumers are connected to only one broker; I don't know whether that is where the problem lies.

Java receives Kafka's per-second pushes unevenly

Kafka is supposed to push data once per second, and the Java side computes and writes to the database once per second. But we found that when there is a delay, Kafka pushes twice within one second, and with large volumes of data the Java-side computation shows occasional discrepancies. How should the Java side handle this? The scenario is like a Double Eleven rolling big-screen dashboard.

Kafka producer (Java client) inside a Docker container cannot send messages!!

During project rollout, an application deployed in a Docker container could not send messages with the Kafka client, while the same application deployed on the physical machine (the container's host) could send them. We then added and tuned two parameters, linger.ms and batch.size, on the Kafka producer inside the container, after which the client in the container could send messages. What we don't understand now: what difference do the default values of linger.ms and batch.size make between running in a container and on a physical machine?
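For reference, both parameters shape how the producer batches: linger.ms (default 0) is how long to wait for more records before shipping a batch, and batch.size (default 16384 bytes) is the per-partition batch cap. A minimal sketch of setting them explicitly, as was done in the container (broker address and values are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Wait up to 50 ms for more records so batches fill up
        // (the default is 0: send as soon as the sender thread is free).
        props.put("linger.ms", 50);
        // Allow up to 32 KB per partition batch (the default is 16384 bytes).
        props.put("batch.size", 32768);
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
```

The defaults themselves are the same in a container and on a physical machine, so the observed difference likely comes from the environment (networking or timing behavior) rather than from different default values.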

Kafka 1.0.0 client: producer fails to produce data

Configuration:

```java
Properties props = new Properties();
// broker addresses
props.put("bootstrap.servers", "39.108.61.252:9092,39.108.61.252:9093,39.108.61.252:9094");
// acknowledgment required for requests
props.put("acks", "0");
// retry on request failure
props.put("retries", 1);
// the producer tries to combine records into a single batched request, which helps both
// client and server performance; don't set this above the default, or memory is wasted
// and throughput actually drops
//props.put("batch.size", 16384);
// collect records for a short period and send them together
//props.put("linger.ms", 50);
// buffer memory size
props.put("buffer.memory", 33554432);
// key serializer
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// value serializer
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producer = new KafkaProducer<>(props);
```

Producing data: without Thread.sleep() the data is never sent successfully; with it, the consumer receives the data. Alternatively, if Thread.sleep() is removed, adding close() at the end of production also makes it succeed.

```java
for (int i = 0; i < 10; i++) {
    try {
        Thread.sleep(50);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    producer.send(new ProducerRecord<>("topic_user_general_info_update", "simpleKey", "value-" + i));
}
```

This has been bothering me for a long time; I can't tell whether it's the configuration or something else.
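The symptom described is consistent with send() being asynchronous: it only appends records to an in-memory buffer, and a background sender thread ships them later, so if main() exits immediately the records never leave the process. Sleeping happens to give the sender thread time; flush() or close() drains the buffer deterministically. A minimal sketch, reusing the topic from the question (the broker address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // send() is asynchronous: it only enqueues into the producer's buffer.
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("topic_user_general_info_update",
                    "simpleKey", "value-" + i));
        }
        // flush() blocks until every buffered record has been transmitted;
        // close() flushes too and then releases resources. Without either,
        // the JVM can exit before the sender thread ships the batches.
        producer.flush();
        producer.close();
    }
}
```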

Did the Kafka cluster start successfully?

After deploying a Kafka cluster, what is the difference between starting Kafka in the foreground and starting it in the background?

Kafka cluster ports mapped to the public internet; cannot produce messages from outside

There are 3 Kafka brokers on internal IPs:

```
192.168.2.21:9092
192.168.2.22:9092
192.168.2.23:9092
```

There is also one machine connected to both the intranet and the internet (internal IP 192.168.2.13, public IP 172.50.63.1). Port forwarding was set up on this machine with rinetd:

```
0.0.0.0 9092 192.168.2.21 9092
0.0.0.0 9093 192.168.2.22 9092
0.0.0.0 9094 192.168.2.23 9092
```

Then, producing messages (topic=ffff) from my development machine with Java code:

```java
props.put("bootstrap.servers", "172.50.63.1:9092");
```

the following error occurs:

```
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for ffff-0
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:65)
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
	at Producer.run(Producer.java:140)
Caused by: org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for ffff-0
```

This is my Kafka configuration:

```
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.2.13:9092
```

My question: what should this configuration be so that external clients can produce messages normally? Should advertised.listeners be the public IP 172.50.63.1, or the internal IP of the forwarding machine, 192.168.2.13?
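For context: advertised.listeners is the address a broker hands back to clients in metadata responses, and the client then connects directly to that address for produce requests, so it must be something the external client can actually reach, i.e. the public IP plus the specific forwarded port of that broker. A hedged sketch of what each broker's server.properties could look like under the rinetd mapping above (this is an assumption about the intended setup, not a tested configuration):

```properties
# On broker 192.168.2.21 (reached from outside via 172.50.63.1:9092):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://172.50.63.1:9092

# On broker 192.168.2.22, advertise the forwarded port instead:
#   advertised.listeners=PLAINTEXT://172.50.63.1:9093
# On broker 192.168.2.23:
#   advertised.listeners=PLAINTEXT://172.50.63.1:9094
```

One caveat: with this, intranet clients are also told to connect via the public IP; separating internal and external traffic cleanly takes two named listeners, which requires a broker version that supports listener names.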

Kafka cluster high-availability test

I first set up a Kafka cluster of 3 nodes, started all 3, confirmed messages were being sent normally, and then tested what happens when one node goes down. server.properties:

```
# Unique ID of this machine in the cluster, same idea as ZooKeeper's myid
broker.id=0
# Port on which this Kafka node serves clients; the default is 9092
listeners=PLAINTEXT://192.168.1.252:9092
# Number of threads the broker uses for network processing
num.network.threads=3
# Number of threads the broker uses for I/O processing
num.io.threads=8
# Send buffer size; data is not sent immediately but buffered until it reaches a certain size, which improves performance
socket.send.buffer.bytes=102400
# Receive buffer size; data is serialized to disk once a certain amount has arrived
socket.receive.buffer.bytes=102400
# Maximum size of a request fetching from or producing to Kafka; must not exceed the JVM heap size
socket.request.max.bytes=104857600

############################# Log Basics #############################

# Directory where messages are stored; may be a comma-separated list of directories
# (num.io.threads above should be greater than the number of directories).
# With multiple directories, a newly created topic persists its messages in whichever
# listed directory currently holds the fewest partitions.
log.dirs=/usr/local/kafka_2.11-1.1.0/data/kafka/logs
# Default number of partitions per topic
num.partitions=3
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# Default maximum message retention time: 168 hours, i.e. 7 days
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# Kafka appends messages to segment files; when a file exceeds this size, Kafka rolls a new one
log.segment.bytes=1073741824
# Every 300000 ms, check for segments older than the retention time (log.retention.hours=168) and delete them
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# ZooKeeper connection string
zookeeper.connect=192.168.1.252:2181,192.168.1.253:2181,192.168.1.254:2181
# Connection timeout
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
```

I created a topic: ![图片说明](https://img-ask.csdn.net/upload/201805/15/1526363032_696232.png)

Then sent the messages msg1 and msg2: ![图片说明](https://img-ask.csdn.net/upload/201805/15/1526363075_21045.png)

Then shut down one of the Kafka nodes: ![图片说明](https://img-ask.csdn.net/upload/201805/15/1526363143_679462.png)

The console printed:

```
2018/05/15-13:48:03 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator- Marking the coordinator 192.168.1.254:9092 (id: 2147483645 rack: null) dead for group defaultGroup
2018/05/15-13:48:03 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator- Discovered coordinator 192.168.1.254:9092 (id: 2147483645 rack: null) for group defaultGroup.
2018/05/15-13:48:04 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator- Marking the coordinator 192.168.1.254:9092 (id: 2147483645 rack: null) dead for group defaultGroup
```

Then I kept sending: msg3 got no response on the console, and msg4 got no response either. After restarting that Kafka node, the console printed:

```
"msg4"
"msg3"
2018/05/15-13:52:11 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] WARN org.apache.kafka.clients.consumer.internals.Fetcher- Received unknown topic or partition error in fetch for partition trading-1. The topic/partition may not exist or the user may not have Describe access to it
2018/05/15-13:52:11 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator- Marking the coordinator 192.168.1.254:9092 (id: 2147483645 rack: null) dead for group defaultGroup
2018/05/15-13:52:11 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] WARN org.apache.kafka.clients.consumer.internals.ConsumerCoordinator- Auto offset commit failed for group defaultGroup: Offset commit failed with a retriable exception. You should retry committing offsets.
2018/05/15-13:52:11 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator- Discovered coordinator 192.168.1.254:9092 (id: 2147483645 rack: null) for group defaultGroup.
```

Doesn't Kafka do leader/follower replication? When one node dies, isn't another supposed to take over?
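The "Marking the coordinator ... dead" loop in the logs points at the internal __consumer_offsets topic rather than at the data topic: with offsets.topic.replication.factor=1, each group's coordinator partition exists on exactly one broker, so killing that broker leaves the consumer group without a coordinator even if the data topic itself is replicated. A hedged sketch of the internal-topic settings a three-node test cluster would want (values are the common recommendation, not taken from the question):

```properties
# Replicate the internal consumer-offsets and transaction-state topics across
# all three brokers so the group coordinator survives a single broker failure.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
# Note: these take effect only when the internal topics are first created; on an
# existing cluster the replication factor must be raised via partition reassignment.
```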

Golang consumer receives Kafka messages with a delay after connecting to Kafka

<div class="post-text" itemprop="text"> <p><em>I'm new to Golang and Kafa so this might seem like a silly question.</em></p> <p>After my Kafka consumer first connects to the Kafka server, why is there a delay (~ 20 secs) between establishing connection to the Kafka server, and receiving the first message?</p> <p>It prints a message right before <code>consumer.Messages()</code> and print another message for each message received. The ~20 sec delay is between the first <code>fmt.Println</code> and second <code>fmt.Println</code>.</p> <pre><code>package main import ( "fmt" "github.com/Shopify/sarama" cluster "github.com/bsm/sarama-cluster" ) func main() { // Create the consumer and listen for new messages consumer := createConsumer() // Create a signal channel to know when we are done done := make(chan bool) // Start processing messages go func() { fmt.Println("Start consuming Kafka messages") for msg := range consumer.Messages() { s := string(msg.Value[:]) fmt.Println("Msg: ", s) } }() &lt;-done } func createConsumer() *cluster.Consumer { // Define our configuration to the cluster config := cluster.NewConfig() config.Consumer.Return.Errors = false config.Group.Return.Notifications = false config.Consumer.Offsets.Initial = sarama.OffsetOldest // Create the consumer brokers := []string{"127.0.0.1:9092"} topics := []string{"orders"} consumer, err := cluster.NewConsumer(brokers, "my-consumer-group", topics, config) if err != nil { log.Fatal("Unable to connect consumer to Kafka") } go handleErrors(consumer) go handleNotifications(consumer) return consumer } </code></pre> <p><strong>docker-compose.yml</strong></p> <pre><code>version: '2' services: zookeeper: image: "confluentinc/cp-zookeeper:5.0.1" hostname: zookeeper ports: - "2181:2181" environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 broker-1: image: "confluentinc/cp-enterprise-kafka:5.0.1" hostname: broker-1 depends_on: - zookeeper ports: - "9092:9092" environment: KAFKA_BROKER_ID: 1 KAFKA_BROKER_RACK: rack-a KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181' KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1 KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://127.0.0.1:9092' KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter KAFKA_DELETE_TOPIC_ENABLE: "true" KAFKA_JMX_PORT: 9999 KAFKA_JMX_HOSTNAME: 'broker-1' KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092 CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181 CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1 CONFLUENT_METRICS_ENABLE: 'true' CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous' KAFKA_CREATE_TOPICS: "orders:1:1" </code></pre> </div>

Kafka consumer: getting messages

With ActiveMQ, the broker pushes messages to the consumer: the consumer sets up a MessageListener and just receives messages. But Kafka requires the consumer to pull messages from the broker. How do I know when the broker already has the topic in question? Fetch on a timer?
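For reference, the consumer API hides the pulling inside a loop: subscribe() registers interest in the topic, and each poll() call both fetches records and keeps the subscription's metadata fresh, so there is no separate "check whether the broker has the topic" step to write. A minimal sketch (broker address, group id, and topic are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "demo-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        try {
            while (true) {
                // poll() is the "pull": it blocks up to the timeout, returns whatever
                // records have arrived, and refreshes topic metadata in the background.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }
}
```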

Kafka cluster simulated on one machine: why does shutting down one broker stop the whole cluster from consuming? Looking for an answer

I simulated a Kafka cluster on a single machine; why does shutting down one broker make the whole cluster unable to consume? Consumption just hangs and waits. Environment:
1. One machine runs one ZooKeeper, started via kafka/bin/zookeeper-server-start.sh.
2. The kafka directory was copied four times, with broker.id in server.properties changed to 1, 2, 3, 4 respectively.
3. After starting everything, the topic cxl is consumed normally.
4. After shutting down the Kafka service with broker.id 1, the cxl topic can no longer be consumed.
![图片说明](https://img-ask.csdn.net/upload/201809/12/1536720817_648848.png)
What I don't understand: when one node of a healthy cluster dies, the cluster as a whole should be unaffected. The topic description shows my partition is on broker 2, and what I shut down was broker 1, so there should be no impact, yet consumption stops. Why?
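For context: the symptom matches a group-coordinator problem rather than a data-topic problem. If the __consumer_offsets internal topic was created with replication factor 1 (the default in the sample config), some of its partitions live only on broker 1, and the consumer group hangs once its coordinator partition disappears, even though the cxl partitions sit on broker 2. A hedged way to verify (the ZooKeeper address is a placeholder):

```sh
# Check which brokers hold the partitions of the data topic and of the
# internal offsets topic, and what their replication factors are.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic cxl
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets
```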

Uneven memory usage in a Kafka cluster

The Kafka cluster has three servers (33 GB of memory each), configured and started in cluster mode. Two of them use only about 3 GB of memory while the other uses 32 GB, and no message backlog is visible. What could be the reason?

Kafka producer cannot update metadata when sending messages

Extremely urgent, please help. The Kafka broker starts normally and ZooKeeper is fine, but sending messages with the producer fails with a metadata update error:

```
[root@newkafka2 ~]# /root/kafka2/bin/kafka-console-producer.sh -broker-list server2:9092 -topic newkafka2
>111111111111
[2017-10-20 15:42:14,048] ERROR Error when sending message to topic newkafka2 with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>[2017-10-20 15:43:21,289] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-10-20 15:45:28,516] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-10-20 15:47:35,876] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
```

![图片说明](https://img-ask.csdn.net/upload/201710/20/1508485750_814271.png)

Kafka installed with Docker: producer test never succeeds in producing messages

# 1. Kafka installed with Docker: the producer never succeeds in producing messages

After installing Kafka with Docker, creating a topic reports no errors:

```
bash-4.4# kafka-topics.sh --create --zookeeper 118.190.26.133:2181 --replication-factor 1 --partitions 1 --topic mytest_topic
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic mytest_topic.
```

But running the producer gives the following errors (the same pair of WARN lines keeps repeating):

```
bash-4.4# kafka-console-producer.sh --broker-list 118.190.26.133:9092 --topic mytest_topic
>[2020-04-20 02:50:33,086] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,087] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,175] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,175] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
```

The relevant settings in server.properties (the standard Apache license header and commented examples are omitted):

```
broker.id=0
advertised.host.name=master

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://118.190.26.133:9092
```
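For context: with listeners bound to 0.0.0.0 inside the container and advertised.listeners pointing at the public IP, the usual remaining suspects are that the container's port 9092 was never published to the host, or that a firewall/security group blocks it; the legacy advertised.host.name=master line is superseded by advertised.listeners and can simply be removed. A hedged sketch of checks (the docker flags are assumptions about how the container was started, not from the question):

```sh
# Publish the broker port when starting the container (if it wasn't already):
#   docker run -d -p 9092:9092 ... <kafka-image>

# From the client machine, verify the port is actually reachable:
nc -vz 118.190.26.133 9092

# On the host, confirm the container is listening and the mapping exists:
docker ps --format '{{.Names}} {{.Ports}}'
```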

Kafka Consumer message pulling

On 0.10, I have recently been testing consuming cluster messages on the consumer side. I set the "maximum records per poll" parameter, but the number actually pulled is not constant: with 500 or 600 it is a fixed 500 or 600 per poll, but with 10000 the count per poll varies between about 4000 and 10000 (each message is 1 KB). So what exactly is the consumer's pull mechanism, and why does the number of records pulled vary each time? Note: the messages had already been written to the partitions beforehand.
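For reference: max.poll.records (the "max records per poll" setting) is only an upper bound on what one poll() returns from the client's already-fetched buffer; how many bytes each network fetch actually brings back is governed by separate settings, notably max.partition.fetch.bytes (default 1 MB per partition) and fetch.max.wait.ms. So once the record cap exceeds what a single fetch happens to deliver, the per-poll count naturally fluctuates. A hedged sketch of the knobs involved, with illustrative values for 1 KB messages (in real code these properties would be passed to the KafkaConsumer constructor):

```java
import java.util.Properties;

public class FetchTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Upper bound on records returned by a single poll() - not a guarantee.
        props.put("max.poll.records", 10000);
        // Per-partition byte cap for one network fetch; the default 1048576 (1 MB)
        // is only ~1000 messages of 1 KB per partition per fetch.
        props.put("max.partition.fetch.bytes", 10485760);
        // How long the broker may wait to accumulate fetch.min.bytes before replying.
        props.put("fetch.min.bytes", 1);
        props.put("fetch.max.wait.ms", 500);
        System.out.println(props); // passed to new KafkaConsumer<>(props) in real code
    }
}
```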

A question about a Kafka cluster

A quick question: for a running Kafka cluster with SASL_PLAINTEXT enabled, how do I add a user to the brokers' JAAS file? Is the only way really to add it and then restart the brokers one by one?
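For what it's worth: user entries in a static JAAS file for SASL/PLAIN are read only at broker startup, so adding one does require a rolling restart of the brokers. If restarts are unacceptable, the usual alternative is SASL/SCRAM, whose credentials are stored in ZooKeeper and can be created at runtime; a hedged sketch (the address and user name are placeholders):

```sh
# Create or update a SCRAM credential at runtime - no broker restart needed.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[password=alice-secret]' \
  --entity-type users --entity-name alice
```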

What is the best way to push Kafka messages from my edge nodes?

<div class="post-text" itemprop="text"> <p>I have a worker in the primary region (US-East) that computes data on traffic at our edge locations. I want to push the data from an edge region to our primary kafka region. </p> <p>An example is Poland, Australia, US-West. I want to push all these stats to US-East. I don't want to encurr additional latency during the writes from the edge regions to the primary. </p> <p>Another option is to create another kafka cluster and worker that acts as a relay. That would require us to maintain individual clusters in each region and would add a lot more complexity to our deployments.</p> <p>I've seen Mirror Maker, but I don't really want to Mirror anything, I guess I'm looking more for a relay system. If this isn't the designed way to do this, how can I aggregate all of our application metrics to the primary region to be computed and sorted?</p> <p>Thank you for your time.</p> </div>

Kafka consumer sometimes gets no messages on startup

When the Kafka consumer starts, it sometimes cannot get any messages, but after a restart it can; sometimes it takes many restarts... I don't know why, and would appreciate some guidance. Startup log:

```
[ INFO ] [2016-09-29 14:34:53] org.hibernate.validator.internal.util.Version [30] - HV000001: Hibernate Validator 5.2.4.Final
[ INFO ] [2016-09-29 14:34:53] com.coocaa.salad.stat.ApplicationMain [48] - Starting ApplicationMain on zhuxiang with PID 1740 (D:\IdeaProjects\green-salad\adx-stat\target\classes started by zhuxiang in D:\IdeaProjects\green-salad)
[ INFO ] [2016-09-29 14:34:53] com.coocaa.salad.stat.ApplicationMain [663] - The following profiles are active: dev
[ INFO ] [2016-09-29 14:34:54] org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext [581] - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5754de72: startup date [Thu Sep 29 14:34:54 CST 2016]; root of context hierarchy
2016-09-29 14:34:55 JRebel: Monitoring Spring bean definitions in 'D:\IdeaProjects\green-salad\adx-stat\target\classes\spring-integration-consumer.xml'.
[ INFO ] [2016-09-29 14:34:55] org.springframework.beans.factory.xml.XmlBeanDefinitionReader [317] - Loading XML bean definitions from URL [file:/D:/IdeaProjects/green-salad/adx-stat/target/classes/spring-integration-consumer.xml]
[ INFO ] [2016-09-29 14:34:56] org.springframework.beans.factory.config.PropertiesFactoryBean [172] - Loading properties file from URL [jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
2016-09-29 14:34:56 JRebel: Monitoring properties in 'jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties'.
[ INFO ] [2016-09-29 14:34:56] org.springframework.integration.config.IntegrationRegistrar [330] - No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
[ INFO ] [2016-09-29 14:34:56] org.springframework.beans.factory.support.DefaultListableBeanFactory [843] - Overriding bean definition for bean 'kafkaConsumerService' with a different definition: replacing [Generic bean: class [com.coocaa.salad.stat.service.KafkaConsumerService]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in file [D:\IdeaProjects\green-salad\adx-stat\target\classes\com\coocaa\salad\stat\service\KafkaConsumerService.class]] with [Generic bean: class [com.coocaa.salad.stat.service.KafkaConsumerService]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in URL [file:/D:/IdeaProjects/green-salad/adx-stat/target/classes/spring-integration-consumer.xml]]
[ INFO ] [2016-09-29 14:34:57] org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor [130] - No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
[ INFO ] [2016-09-29 14:34:57] org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor [158] - No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
[ INFO ] [2016-09-29 14:34:57] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [class org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$3dea2e76] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:58] org.springframework.beans.factory.config.PropertiesFactoryBean [172] - Loading properties file from URL [jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
2016-09-29 14:34:58 JRebel: Monitoring properties in 'jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties'.
[ INFO ] [2016-09-29 14:34:58] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'integrationGlobalProperties' of type [class org.springframework.beans.factory.config.PropertiesFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:58] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'integrationGlobalProperties' of type [class java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:59] org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer [88] - Tomcat initialized with port(s): 8081 (http)
```

spring-integration-consumer.xml is as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka
            http://www.springframework.org/schema/integration/kafka/spring-integration-kafka-1.0.xsd
            http://www.springframework.org/schema/integration
            http://www.springframework.org/schema/integration/spring-integration.xsd
            http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/task
            http://www.springframework.org/schema/task/spring-task.xsd">

    <!-- topic test conf -->
    <int:channel id="inputFromKafka">
        <int:dispatcher task-executor="kafkaMessageExecutor"/>
    </int:channel>

    <!-- ZooKeeper configuration; multiple servers can be listed -->
    <int-kafka:zookeeper-connect id="zookeeperConnect"
                                 zk-connect="172.20.135.95:2181,172.20.135.95:2182"
                                 zk-connection-timeout="10000"
                                 zk-session-timeout="10000"
                                 zk-sync-time="2000"/>

    <!-- channel adapter; auto-startup="true", otherwise no data is received -->
    <int-kafka:inbound-channel-adapter id="kafkaInboundChannelAdapter"
                                       kafka-consumer-context-ref="consumerContext"
                                       auto-startup="true"
                                       channel="inputFromKafka">
        <int:poller fixed-delay="1" time-unit="MILLISECONDS"/>
    </int-kafka:inbound-channel-adapter>

    <task:executor id="kafkaMessageExecutor" pool-size="8" keep-alive="120" queue-capacity="500"/>

    <bean id="kafkaDecoder"
          class="org.springframework.integration.kafka.serializer.common.StringDecoder"/>

    <bean id="consumerProperties"
          class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
                <prop key="auto.commit.enables">true</prop>
            </props>
        </property>
    </bean>

    <!-- bean that receives the messages -->
    <bean id="kafkaConsumerService" class="com.coocaa.salad.stat.service.KafkaConsumerService"/>

    <!-- method that handles received messages -->
    <int:outbound-channel-adapter channel="inputFromKafka"
                                  ref="kafkaConsumerService"
                                  method="processMessage"/>

    <int-kafka:consumer-context id="consumerContext"
                                consumer-timeout="1000"
                                zookeeper-connect="zookeeperConnect"
                                consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration group-id="group-4"
                                              value-decoder="kafkaDecoder"
                                              key-decoder="kafkaDecoder"
                                              max-messages="5000">
                <!-- two topic configurations -->
                <int-kafka:topic id="clientsRequests2" streams="4"/>
                <!--<int-kafka:topic id="sunneytopic" streams="4" />-->
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
</beans>
```

The Kafka version is 0.10.

Kafka cluster error; waiting online for help

```
WARN [Controller-1-to-broker-2-send-thread], Controller 1's connection to broker Node(2, mine-28, 9092) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to Node(2, mine-28, 9092) failed
	at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$1.apply(NetworkClientBlockingOps.scala:62)
	at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$1.apply(NetworkClientBlockingOps.scala:58)
	at kafka.utils.NetworkClientBlockingOps$$anonfun$kafka$utils$NetworkClientBlockingOps$$pollUntil$extension$2.apply(NetworkClientBlockingOps.scala:106)
	at kafka.utils.NetworkClientBlockingOps$$anonfun$kafka$utils$NetworkClientBlockingOps$$pollUntil$extension$2.apply(NetworkClientBlockingOps.scala:105)
	at kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:129)
	at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
	at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
	at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
	at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:225)
	at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:172)
	at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
```
