Kafka consumer: manual commit question

The consumer is configured for manual commit with enable.auto.commit=false, but consumer.commitAsync() is never called after processing messages. As I understand it, the consumed offset is therefore never updated. If the producer first sends five messages and the consumer processes those five, and then the producer sends another five, I would expect the next poll to return 10 messages (since the offset was never committed, the consumer should poll again starting from the first batch). But in my test it still returns only five (the consumer was never restarted; it keeps running, and it consumes exactly as many messages as the producer sends).
My question: since the latest offset was never committed, why does the consumer receive only the newly produced messages instead of polling from the old offset? Yet if I restart the consumer, it polls 10 messages, and after another restart it is still those same 10 messages whose offset was never committed.
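For reference, a minimal sketch of the setup described above. The broker address localhost:9092, the topic name test, and the group id test-group are placeholders, not taken from the question:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NoCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "test-group");              // placeholder group id
        props.put("enable.auto.commit", "false");         // manual commit, as in the question
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
                // consumer.commitAsync() is deliberately never called here.
            }
        }
    }
}
```

Run against a live broker, this prints each new batch exactly once while the process stays up, which reproduces the behavior the question describes.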

5 answers

For example, the first batch is 1, 2, 3, 4, 5 and the second batch is 6, 7, 8, 9, 10. The consumer receives and prints: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 (ignoring order), while my expectation was: 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

m0_37971707
m0_37971707: Hello, I'd also like to skip committing the offset when business processing fails, so that the message is consumed again. How did you handle it?
about a year ago · Reply
qq_28263897
无风三尺浪, replying to 雷小涛的摸爬滚打: because different consumers each have their own offset; their offsets are not the same.
about a year ago · Reply
rao1212125
rao1212125, replying to qq_36768025: have a coke, bro.
over a year ago · Reply
qq_36768025
刘醒, replying to LeiXiaoTao_Java: Kafka keeps one copy of the offset locally and one on the server. After a local pull the client updates its local offset (it is not committed to the server), so the earlier messages will not be fetched again; when another consumer in the same group joins, it pulls the offset from the server.
over a year ago · Reply
LeiXiaoTao_Java
雷小涛的摸爬滚打: "It then enters a loop. From the source code you can see that if auto.commit is switched off, it first commits the offsets of the data it has already processed and only then fetches new data from Kafka." First of all, thanks to the author; the article is very well written. The original text says spring-kafka commits the previous offsets before fetching new data. Since the latest offset has been committed, why would a newly started consumer read from a position before that commit?
almost 2 years ago · Reply
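Several comments in this thread ask how to re-consume a message when business processing fails. One common pattern, shown here as a sketch rather than anything from the thread (process() and the already-built consumer are assumed), is to commit only after success and to seek the partition back to the failed record so the next poll returns it again:

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

// Commit only on success; on failure, rewind the in-memory position so the
// next poll() delivers the same record again.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> record : records) {
        TopicPartition tp = new TopicPartition(record.topic(), record.partition());
        try {
            process(record); // assumed business logic
            consumer.commitSync(Collections.singletonMap(
                    tp, new OffsetAndMetadata(record.offset() + 1))); // commit the NEXT offset
        } catch (Exception e) {
            consumer.seek(tp, record.offset()); // rewind the client-side position
            break;                              // stop this batch; re-poll from the failed offset
        }
    }
}
```

Note that skipping the commit alone is not enough: the client's in-memory position has already moved past the record, so without the seek() it would only be re-delivered after a restart or rebalance.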

The offset has already been committed. Go study it some more instead of taking things for granted.

  1. By default the consumer starts from the latest offset, i.e. the current end of the log; to read everything, set the offset reset policy to earliest.
  2. Produced messages are appended to the end of the topic.
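For reference, the reset policy in point 1 is an ordinary consumer property. A short sketch (the group id is a placeholder); it only takes effect when the group has no committed offset for a partition:

```java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
props.put("group.id", "fresh-group");             // a group with no committed offset yet
// "latest" (the default) starts at the log end; "earliest" starts at the beginning.
// Ignored whenever the group already has a committed offset for the partition.
props.put("auto.offset.reset", "earliest");
```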

Kafka in effect keeps two records of the offset: the server (broker) stores the committed offset, and the consumer client keeps its own local position. A running consumer fetches from that local position, which advances with every poll whether or not anything is committed, so the consumer receives and prints 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. If the consumer restarts between the two fetches, its local record disappears and it falls back to the committed offset on the server, so it prints 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
In that situation, if there are multiple threads, every thread will re-consume all of the messages, just as a single thread would.
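A small sketch that makes the two records visible, continuing from a consumer built as in the earlier sketch (the topic and partition are placeholders): position() is the client's in-memory fetch position, while committed() is what the broker has stored:

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("test", 0); // placeholder topic/partition
consumer.assign(Collections.singletonList(tp));
consumer.poll(Duration.ofMillis(1000));            // fetch one batch, commit nothing

// Client-side position: already advanced past the records just fetched.
System.out.println("position  = " + consumer.position(tp));
// Broker-side committed offset: unchanged, since nothing was ever committed.
OffsetAndMetadata committed = consumer.committed(tp);
System.out.println("committed = " + (committed == null ? "none" : committed.offset()));
```

After one poll with no commits, position has moved while committed stays put (or is null for a brand-new group), which is exactly why a running consumer sees only new messages but a restarted one rewinds.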

Other related questions
How to manually commit a kafka consumer offset from Java code
As the title says: on the consumer side, how do I manually commit the message offset in Java code?
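A sketch of the usual answer with the new consumer API, assuming enable.auto.commit=false and a hypothetical handle() method:

```java
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
for (ConsumerRecord<String, String> record : records) {
    handle(record); // assumed processing
}
// Synchronously commit the offsets of everything returned by the last poll:
consumer.commitSync();
// Or commit asynchronously, with a callback for failures:
consumer.commitAsync((offsets, exception) -> {
    if (exception != null) {
        System.err.println("commit failed for " + offsets + ": " + exception);
    }
});
```

commitSync() blocks until the broker acknowledges; commitAsync() does not, which is why it takes a callback for error handling.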
Implementing a consumer with the kafka consumer Java API: KafkaStream prints no data
kafka 2.2.0, consumer implemented through the consumer Java API; KafkaStream prints no data.
```java
package kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumerTest extends Thread { // runs fine in a Linux environment

    @Override
    public void run() {
        String topic = "powerTopic";
        Properties pro = new Properties();
        pro.put("zookeeper.connect", "10.2.2.61:2181,10.2.2.62:2181,10.2.2.63:2181");
        pro.put("group.id", "test");
        // pro.put("zookeeper.session.timeout.ms", "4000");
        // pro.put("consumer.timeout.ms", "-1");
        ConsumerConfig paramConsumerConfig = new ConsumerConfig(pro);
        ConsumerConnector cosumerConnector = Consumer.createJavaConsumerConnector(paramConsumerConfig);
        Map<String, Integer> paramMap = new HashMap<String, Integer>();
        paramMap.put(topic, 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStream = cosumerConnector.createMessageStreams(paramMap);
        KafkaStream<byte[], byte[]> kafkastream = messageStream.get(topic).get(0);
        // System.out.println(kafkastream.size());
        System.out.println("hello");
        ConsumerIterator<byte[], byte[]> iterator = kafkastream.iterator();
        while (iterator.hasNext()) {
            // MessageAndMetadata<byte[], byte[]> message = iterator.next();
            // String topic1 = message.topic();
            String msg = new String(iterator.next().message());
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        new KafkaConsumerTest().start();
        new MyProducer01().start();
    }
}
```
The kafka environment runs on CentOS. Running the program from Eclipse on Windows prints no data, and it neither finishes nor reports an error: ![screenshot](https://img-ask.csdn.net/upload/201912/20/1576807897_599340.jpg) Result after packaging and running on the cluster: ![screenshot](https://img-ask.csdn.net/upload/201912/20/1576808400_74063.jpg)
A Kafka Consumer usage question
Can a Kafka consumer consume messages by key? If not, what approach could implement the scenario below? Scenario: file relay. Detail: a file has several versions, all of which enter the broker via the producer; on consumption, a specific version of the file must be retrieved.
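Kafka has no broker-side filtering by key; consumers always receive whole partitions. A common workaround, sketched below with placeholder names, is to produce each file version with the version as the message key (which also keeps all versions of a file ordered within one partition) and filter on the key client-side:

```java
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
for (ConsumerRecord<String, String> record : records) {
    if ("v2".equals(record.key())) {        // "v2" is a placeholder version key
        handleFileVersion(record.value());  // placeholder handler
    }
    // Records with other keys are simply skipped on the client.
}
```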
kafka: cannot consume data
The kafka cluster is set up correctly, and producing and consuming through the console both work, but a Java program reads no messages at all; I have already tried changing the group.
```java
package test;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumer extends Thread {

    private String topic;

    public KafkaConsumer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        // Set the consumer parameters via Properties, create a connector, and connect to Kafka.
        ConsumerConnector consumer = createConsumer();
        // The map specifies the topic to fetch and the number of streams.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 3);
        // Get the message streams from the connector.
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);
        // Read the data from one partition stream of the topic.
        KafkaStream<byte[], byte[]> kafkaStream = messageStreams.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> iterator = kafkaStream.iterator();
        while (iterator.hasNext()) {
            byte[] message = iterator.next().message();
            System.out.println("message is:" + new String(message));
        }
    }

    private ConsumerConnector createConsumer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "XXX:2181");
        properties.put("auto.offset.reset", "smallest"); // read old data
        properties.put("group.id", "333fcdcd");
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    public static void main(String[] args) {
        new KafkaConsumer("testtest").start();
    }
}
```
How can I check whether a kafka consumer has committed its offset?
I've hit a problem: my local kafka consumer consumes messages but never commits the offset, so every restart starts consuming from the beginning again! Does anyone know how to check whether the offset was committed, or how to troubleshoot this? My kafka consumer config is:
```java
props.put("zookeeper.session.timeout.ms", "10000");
props.put("zookeeper.connection.timeout.ms", "6000");
props.put("zookeeper.sync.time.ms", "2000");
props.put("auto.commit.interval.ms", "5000");
props.put("auto.offset.reset", "smallest");
props.put("auto.commit.enable", "false");
```
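Two hedged observations, not from the original poster. First, the config shown sets auto.commit.enable=false, so unless the code commits explicitly, nothing will ever be committed, which by itself explains the restart-from-zero behavior. Second, for the modern consumer the committed offsets and lag per partition can be inspected with the stock tool (the old zookeeper-based consumer configured here stored offsets in ZooKeeper instead):

```sh
# Shows CURRENT-OFFSET (committed), LOG-END-OFFSET, and LAG per partition.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group <your-group>
```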
Kafka Consumer message fetching
On 0.10, I've been testing a consumer reading from the cluster with the "maximum records per poll" parameter, and the number actually fetched is not constant. Set it to 500 or 600 and every poll returns exactly 500 or 600; set it to 10000 and each poll returns anywhere from about 4000 to 10000 (each message is 1 KB). So what exactly is the consumer's fetch mechanism, and why does the per-poll count vary? Note: the messages had already been written to the partitions beforehand.
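A sketch of the settings that bound a single poll() in the 0.10+ consumer (the values here are illustrative, not from the question). poll() drains whatever fetch responses are already buffered, and each fetch response is capped in bytes, which is one plausible reason the record count varies once the byte caps hold less than max.poll.records worth of data:

```java
props.put("max.poll.records", "10000");            // upper bound on records per poll()
props.put("max.partition.fetch.bytes", "1048576"); // bytes fetched per partition (default 1 MB)
props.put("fetch.max.bytes", "52428800");          // total bytes per fetch response (0.10.1+)
// With 1 KB messages and a 1 MB per-partition fetch, one fetch holds roughly
// 1000 records per partition, so how many records arrive per poll() depends on
// how many partitions respond in time rather than on max.poll.records alone.
```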
kafka_2.11-1.0.0: --zookeeper vs --bootstrap-server when consuming with the console kafka-console-consumer
I tried two kafka versions today and both show this problem.
1. Create a topic:
> kafka-topics.bat --create --partitions 1 --replication-factor 1 --topic test --zookeeper localhost:2181
2. Write messages to the topic:
> kafka-console-producer.bat --broker-list localhost:9092 --topic test
3. Console consumption with --zookeeper localhost:2181 consumes messages normally:
> kafka-console-consumer.bat --zookeeper localhost:2181 --topic test --from-beginning
4. The same console consumption with --bootstrap-server localhost:9092 receives nothing:
> kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
5. This happens on both Windows and a Linux VM, and with both kafka versions I tried.
6. Someone on stackoverflow ran into the same thing:
> https://stackoverflow.com/questions/41774446/kafka-bootstrap-servers-vs-zookeeper-in-kafka-console-consumer
![Someone abroad hit this problem too](https://nim.nosdn.127.net/NDA3MzIzNw==/bmltYV8yMzg4MDEzNjMxXzE1NjU0NDUxOTI1NjdfMmYxMDRjYjUtZTVjNS00YjM4LWFjMzgtOWFlZTdlYWY4ZDdk)
Has anyone seen the same problem? How can the --bootstrap-server localhost:9092 form also consume messages?
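One frequent cause on single-broker test setups (a guess, not confirmed by the poster): the --bootstrap-server consumer keeps its offsets in the internal __consumer_offsets topic, and creating that topic fails when its default replication factor (3) exceeds the broker count (1), so the new consumer never gets going, while the --zookeeper consumer, which keeps offsets in ZooKeeper, works fine. The broker-side fix would be:

```properties
# server.properties on the single broker
offsets.topic.replication.factor=1
```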
How to solve this problem: kafka cluster on docker
First, some context for this error: I didn't build this on my own VM. Huawei gave me a server, so I installed docker on it directly, created three containers running kafka, and built the cluster with docker-compose. [screenshot of the port mappings] The mapped ports are as shown. But when IDEA connects to the kafka cluster, it first connects to IP:5000,5002,5004, then connects to the returned host.name = kafka1,kafka2,kafka3, and finally keeps trying advertised.host.name = kafka1,kafka2,kafka3. On an ordinary server this would be fine; you would just add the host/IP mappings to the local hosts file. But with containers that doesn't work: the container IPs belong to an internal network, so they cannot be reached from my local machine.
```
20/01/16 22:11:04 INFO AppInfoParser: Kafka version: 2.4.0
20/01/16 22:11:04 INFO AppInfoParser: Kafka commitId: 77a89fcf8d7fa018
20/01/16 22:11:04 INFO AppInfoParser: Kafka startTimeMs: 1579183864167
20/01/16 22:11:04 INFO KafkaConsumer: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Subscribed to topic(s): test, topicongbo
20/01/16 22:11:04 INFO Metadata: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Cluster ID: Kkwgy0gkSkmGAlsC_5cz9A
20/01/16 22:11:04 INFO AbstractCoordinator: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Discovered group coordinator kafka3:9092 (id: 2147483644 rack: null)
20/01/16 22:11:06 WARN NetworkClient: [Consumer clientId=consumer-groupid1-1, groupId=groupid1] Error connecting to node kafka3:9092 (id: 2147483644 rack: null)
java.net.UnknownHostException: kafka3
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
    at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
    at java.net.InetAddress.getAllByName(InetAddress.java:1193)
    at java.net.InetAddress.getAllByName(InetAddress.java:1127)
    at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
    at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:572)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:757)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:737)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:599)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:409)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:294)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.paranoidPoll(DirectKafkaInputDStream.scala:172)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:260)
    at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7(DStreamGraph.scala:54)
    at org.apache.spark.streaming.DStreamGraph.$anonfun$start$7$adapted(DStreamGraph.scala:54)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:145)
    at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:974)
    at scala.collection.parallel.Task.$anonfun$tryLeaf$1(Tasks.scala:53)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.control.Breaks$$anon$1.catchBreak(Breaks.scala:67)
    at scala.collection.parallel.Task.tryLeaf(Tasks.scala:56)
    at scala.collection.parallel.Task.tryLeaf$(Tasks.scala:50)
    at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:971)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute(Tasks.scala:153)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask.compute$(Tasks.scala:149)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:440)
    at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
```
So how is this error solved? On top of that, I have no permission to change the Huawei security group; only ports 5000-5010 are open to the outside.
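The usual fix, sketched here with the environment-variable convention used by common kafka docker images (the exact variable names depend on the image, and <host-ip> is a placeholder for the server's public address): have each broker advertise an address the external client can actually reach, i.e. the host IP and the mapped port, instead of the container hostname:

```yaml
# docker-compose sketch for kafka1; container port 9092 is mapped to host port 5000
environment:
  KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<host-ip>:5000
ports:
  - "5000:9092"
```

The client then bootstraps via <host-ip>:5000 and the metadata it receives back points to addresses it can resolve, so no hosts-file tricks are needed.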
kafka accepts writes but the data cannot be consumed
The kafka offset value never changes, yet data can still be written in. What could cause this? If the offset is changed manually, the data can be consumed.
With spring-kafka partitions, some partitions consume normally while others never consume?
spring-kafka (version 1.0.6.RELEASE, kafka-client 0.9.0.1) with 8 partitions created: the data in one partition is never consumed, while the other partitions consume normally. Does anyone know why?
kafka consumer cannot consume messages
After deploying the kafka cluster and the consumer servers in production, logstash sends real-time logs to the kafka cluster and the consumers initially consume them normally. But after two minutes the consumers stop consuming. I would like to ask how to narrow down where the problem is.
1. The kafka server logs show logstash is still pushing real-time logs to kafka; the cluster's log retention is two hours.
2. There are two consumer machines, and both are running at the same time.
3. The cluster has three servers. While investigating I noticed the consumers are connected to only one broker; I don't know whether that is the cause.
kafka consumption keeps losing data
WARN TaskSetManager: Lost task 9.0 in stage 26569.0 (TID 812602, 104-250-138-250.static.gorillaservers.com): kafka.common.NotLeaderForPartitionException
Two group IDs consume the same topic; after the warning above appears, one of the group IDs can no longer consume any data.
Consuming kafka data from python: why do the first several polls fetch nothing?
When consuming kafka data from python, why is nothing fetched right after connecting? The code:
```python
# -*- coding:utf8 -*-
from kafka import KafkaConsumer
from kafka import TopicPartition
import kafka
import time

# Test how many records kafka's poll method can pull
consumer = KafkaConsumer(
    bootstrap_servers=['192.168.13.202:9092'],
    group_id='group-1',
    auto_offset_reset='earliest',
    enable_auto_commit=False)
consumer.subscribe('test')

print("t1", time.time())
while True:
    print("t2", time.time())
    msg = consumer.poll(timeout_ms=100, max_records=5)  # fetch messages from kafka
    # print(len(msg))
    for i in msg.values():
        for k in i:
            print(k.offset, k.value)
    time.sleep(1)
```
But the printed result is:
```
t1 1567669170.438951
t2 1567669170.438951
t2 1567669171.8450315
t2 1567669172.945094
t2 1567669174.0471573
t2 1567669175.1472201
0 b'{"ast":"\xe7\x82\xb"}'
1 b'{"ast":"","dm":2}'
2 b'{"ast":"12"}'
3 b'{"ast":"sd"}'
4 b'{"ast":"12ds"}'
t2 1567669176.1822793
```
Why does it take about five polls after connecting to kafka before any data comes back?
If the kafka consumer processes data slowly, will data pile up?
As the title says: after receiving data, the kafka consumer runs some business logic that can take around 3 seconds, so processing is slow. What effect does that have on the program? A beginner's question; any advice from the experts would be appreciated!
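A hedged sketch for the question above: slow handlers mainly produce consumer-group lag (unread messages simply accumulate in the topic until polled, bounded by the retention time), and, if processing one poll's batch takes longer than max.poll.interval.ms, the group coordinator expels the consumer and rebalances. Shrinking the batch or raising the interval avoids that:

```java
props.put("max.poll.records", "50");          // smaller batch handed back per poll()
props.put("max.poll.interval.ms", "600000");  // allow up to 10 min between poll() calls
// At ~3 s per record, 50 records take ~150 s per batch, safely under the
// 600 s interval, so the consumer is not kicked out of the group.
```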
kafka dies when its storage disk is half full
On CentOS, an 8 TB disk is mounted to store kafka data. Every time usage reaches about 4 TB, kafka dies. Does anyone know what the problem is?
Cannot read the data in kafka from the cluster
flume parses file data and sends it to kafka; storm (running my own Java program, which uses a kafka consumer) then reads the data from kafka. ZooKeeper manages the cluster, which has 3 nodes. The error is: ![screenshot](https://img-ask.csdn.net/upload/201608/22/1471858935_68063.jpg) From the error it looks like the kafka on one host has a communication problem with zookeeper? That is only my guess, though. Has anyone met something similar, or have any ideas for troubleshooting? To add: only this one host has the problem, yet it stops storm from running at all and no data can be read.
kafka consumption problem: consumption times out after running for a while; asking for the cause
``` 80 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.value:{"tableName":"sw_segment","operateType":"INSERT","operateId":"4921.43.15759673490360004","indexType":"type","storageType":"elasticsearch","date":1575967330707,"tableData":{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}} 16:40:05,480 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.FormatData,tableName:sw_segment|operateId:4921.43.15759673490360004|tableMap:{trace_id=4921.43.15759673490360005, endpoint_name=/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat, latency=3, end_time=1575967349039, endpoint_id=189076, service_instance_id=4921, version=2, start_time=1575967349036, data_binary=Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm, service_id=13, time_bucket=20191210164229, is_error=0, segment_id=4921.43.15759673490360004} 16:40:05,480 DEBUG [ElasticSearchClient] Executing bulk [32] with 8 requests 16:40:05,481 DEBUG [MainClientExec] [exchange: 44] start execution 16:40:05,481 DEBUG [RequestAddCookies] CookieSpec selected: default 16:40:05,481 DEBUG [RequestAuthCache] Re-using cached 'basic' auth scheme for http://10.23.11.224:9200 16:40:05,481 DEBUG [RequestAuthCache] No credentials for preemptive authentication 16:40:05,481 DEBUG [InternalHttpAsyncClient] [exchange: 44] Request connection for {}->http://10.23.11.224:9200 16:40:05,481 DEBUG [PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:05,482 DEBUG [PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]} 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:r]: Event set [w] 16:40:05,482 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,482 DEBUG [InternalHttpAsyncClient] Connection route already established 16:40:05,482 DEBUG [MainClientExec] [exchange: 44] Attempt 1 to execute request 16:40:05,482 DEBUG 
[MainClientExec] Target auth state: UNCHALLENGED 16:40:05,482 DEBUG [MainClientExec] Proxy auth state: UNCHALLENGED 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Set timeout 30000 16:40:05,482 DEBUG [headers] http-outgoing-0 >> POST /_bulk?timeout=1m HTTP/1.1 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Length: 6657 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Type: application/json 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Host: 10.23.11.224:9200 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Connection: Keep-Alive 16:40:05,482 DEBUG [headers] http-outgoing-0 >> User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221) 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Event set [w] 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 4096; completed: false] 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 4293 bytes written 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "POST /_bulk?timeout=1m HTTP/1.1[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Length: 6657[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Type: application/json[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Host: 10.23.11.224:9200[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221)[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.88.15759673496781143","endpoint_name":"/authentication","latency":71,"end_time":1575967349749,"endpoint_id":150,"service_instance_id":11,"version":2,"start_time":1575967349678,"data_binary":"CgwKCgtY1oK15O6q/xsS3gEIARivt+X37i0gurfl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6kwEKDGRiLnN0YXRlbWVudBKCAXNlbGVjdCB0LmNoZWNrX3RpbWUsdC5leHRlbmRfaW5mbyx0LnVzZXJfbmFtZSx0LmxvZ2luX2NoYW5uZWwgZnJvbSBzc29fdXNlcl9zZXNzaW9uIHQgd2hlcmUgdC50aWNrZXQgPSA/IGFuZCB0LmxvZ291dF90aW1lIGlzIG51bGwSnAEIAhjGt+X37i0g2Lfl9+4tMJUBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6UgoMZGIuc3RhdGVtZW50EkJ1cGRhdGUgc3NvX3VzZXJfc2Vzc2lvbiB0IHNldCB0LmV4dGVuZF9pbmZvID0gPyB3aGVyZSB0LnRpY2tldCA9ID8SWAgDGNm35ffuLSDrt+X37i0wlAFABVABWAFgIXoOCgdkYi50eXBlEgNzcWx6GwoLZGIuaW5zdGFuY2USDHR5Z3pwdF9kenN3anoOCgxkYi5zdGF0ZW1lbnQSZhD///////////8BGK635ffuLSD1t+X37i0wlgFYA2ABejAKA3VybBIpaHR0cDovL25zc28uZHpzd2pqYy50YXguY24vYXV0aGVudGljYXRpb256EgoLaHR0cC5tZXRob2QSA0dFVBgMIAs=","service_id":12,"time_bucket":20191210164229,"is_error":0,"segment_id":"11.88.15759673496781142"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> 
"{"trace_id":"4921.36.15759673457660021","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":377,"end_time":1575967346143,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967345766,"data_binary":"Cg0KC7kmJPSg4dHuqv8bErYBEP///////////wEY5pjl9+4tIN+b5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164225,"is_error":0,"segment_id":"4921.36.15759673457660020"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.36.15759673461450023","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":69,"end_time":1575967346214,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967346145,"data_binary":"Cg0KC7kmJKbKyNPuqv8bErYBEP///////////wEY4Zvl9+4tIKac5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164226,"is_error":0,"segment_id":"4921.36.15759673461450022"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673529984299","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":14,"end_time":1575967353012,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967352998,"data_binary":"CgwKCgslqsqf9O6q/xsSsAEQ////////////ARim0eX37i0gtNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164232,"is_error":0,"segment_id":"11.37.15759673529984298"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673530124301","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":12,"end_time":1575967353024,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967353012,"data_binary":"CgwKCgsljJCo9O6q/xsSsAEQ////////////ARi00eX37i0gwNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.37.15759673530124300"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"/v4/default/registry/mi" 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] Request completed 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 6657; 
completed: true] 16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 2561 bytes written 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "croservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":2,"end_time":1575967353965,"endpoint_id":146,"service_instance_id":11,"version":2,"start_time":1575967353963,"data_binary":"CgwKCgsv2LPs+O6q/xsSwwEQ////////////ARjr2OX37i0g7djl9+4tMJIBQAdQAVgDYDt6EgoLaHR0cC5tZXRob2QSA1BVVHqIAQoDdXJsEoABL3Y0L2RlZmF1bHQvcmVnaXN0cnkvbWljcm9zZXJ2aWNlcy82MmFmODg0MDMxMmM0NzUwMzcwYzNlYTY0ZmQ2ODIwM2JmMDJkNTE4L2luc3RhbmNlcy8xNTYzZTNkMTFhNWYxMWVhYmQ1ODAwNTA1NmI2N2NjNC9oZWFydGJlYXQYDCAL","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539631576"}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578"}}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"#/v4/default/registry/microservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":0,"end_time":1575967353965,"endpoint_id":147,"service_instance_id":11,"version":2,"start_time":1575967353965,"data_binary":"CgwKCgsv+s/t+O6q/xsSwAMQ////////////ARjt2OX37i0g7djl9+4tKpoCCAESDAoKCy/Ys+z47qr/GyALOAtCgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzLzYyYWY4ODQwMzEyYzQ3NTAzNzBjM2VhNjRmZDY4MjAzYmYwMmQ1MTgvaW5zdGFuY2VzLzE1NjNlM2QxMWE1ZjExZWFiZDU4MDA1MDU2YjY3Y2M0L2hlYXJ0YmVhdFKAAS92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0OoEBIy92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0UAJYA2A7GAwgCw==","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539651578"}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004"}}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}[\n]" 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:w]: Event cleared [w] 16:40:06,073 DEBUG [FetchSessionHandler] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node 0 sent an incremental fetch response for session 520315326 with 0 response partition(s), 1 implied partition(s) 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 1460 bytes read 
16:40:07,365 DEBUG [wire] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-length: 3697[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "{"took":1872,"errors":true,"items":[{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022","_version":1,"result":"created","_shards"" 16:40:07,365 DEBUG [headers] http-outgoing-0 << HTTP/1.1 200 OK 16:40:07,365 DEBUG [headers] http-outgoing-0 << content-type: application/json; charset=UTF-8 16:40:07,365 DEBUG [headers] http-outgoing-0 << content-length: 3697 16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Response received 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response received HTTP/1.1 200 OK 16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Input ready 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Consume content 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 2325 bytes read 16:40:07,365 DEBUG [wire] http-outgoing-0 << ":{"total":1,"successful":1,"failed":0},"_seq_no":24332,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24333,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of 
[32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24334,"_primary_term":1,"status":201}}]}" 16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection can be kept alive indefinitely 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response processed 16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] releasing connection 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-0][route: {}->http://10.23.11.224:9200] can be kept alive indefinitely 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:07,367 DEBUG [RestClient] request [POST http://10.23.11.224:9200/_bulk?timeout=1m] returned [HTTP/1.1 200 OK] 16:40:07,367 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 3697; pos: 3697; completed: true] 16:40:08,187 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:40:08,389 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:40:11,203 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:40:11,404 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:40:14,221 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 
10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:20,774 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:41:23,589 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:23,790 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response actCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:41:38,665 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:38,867 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:13:13,402 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:13:13,605 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node -1 disconnected. 17:13:13,606 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:13:13,707 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending metadata request (type=MetadataRequest, topics=compute_traceStorage, allowAutoCreate=true) to node 10.23.11.235:9092 (id: 0 rack: null) 17:13:13,907 DEBUG [Metadata] Updating last seen epoch from 0 to 0 for partition compute_traceStorage-0 17:13:13,907 DEBUG [Metadata] Updated cluster metadata version 4 to MetadataCache{cluster=Cluster(id = yQ_sRlMlSui8hlVtaPl4wg, nodes = [10.23.11.235:9092 (id: 0 rack: null)], partitions = [Partition(topic = compute_traceStorage, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])], controller = 10.23.11.235:9092 (id: 0 rack: null))} 17:13:16,420 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:13:16, 17:15:02,170 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:15:04,683 WARN [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. 17:15:04,683 INFO [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Member consumer-1-f8b4d0da-f83c-4849-8cfd-74e748aad3c7 sending LeaveGroup request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:15:04,683 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Disabling heartbeat thread ```
What determines kafka consumer speed
```java
@KafkaListener(topics = {"CRBKC0002.000"})
public void sendSmsInfoByBizType(String record) {
}
```
Assume a single-node kafka.
1. With messages received via the @KafkaListener annotation, does consumption only count as finished once this method returns? Within one instance, can this method only run once at a time, i.e. it cannot be executed by several concurrent threads?
2. If merely receiving the parameter already counts as the end of consumption, i.e. the consumer is done as soon as it gets the record, then suppose the producer writes 1,000,000 records per second into kafka. Would that effectively start 1,000,000 concurrent executions of this method? Tomcat only has 200 threads.
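On the first question, a sketch using spring-kafka's container factory (the bean wiring here is illustrative): the listener method is invoked by a fixed pool of container threads that pull records, not by one new thread per incoming message, so a flood of messages accumulates as lag rather than as threads:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Each of these container threads calls the @KafkaListener method for one
    // record at a time; useful concurrency is capped by the topic's partition count.
    factory.setConcurrency(3);
    return factory;
}
```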
Multiple consumers consuming the same kafka data
When consumers under different group ids consume the same record of a topic, why does the offset value not change? Was the record consumed only once?
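For context on this last question, a sketch with placeholder group ids: offsets are tracked per consumer group, so two groups reading the same topic each receive the full stream and each commit their own offset independently; one group consuming a record never moves the other group's offset:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Consumers in DIFFERENT groups each see every record of the topic and commit
// under their own group key in the internal __consumer_offsets topic.
static KafkaConsumer<String, String> newConsumer(String groupId) {
    Properties p = new Properties();
    p.put("bootstrap.servers", "localhost:9092"); // placeholder broker
    p.put("group.id", groupId);
    p.put("auto.offset.reset", "earliest");
    p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    return new KafkaConsumer<>(p);
}
// newConsumer("group-a") and newConsumer("group-b") subscribed to the same topic
// both receive the record; a commit by group-a never changes group-b's offset.
```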