Kafka consumer keeps losing data

WARN TaskSetManager: Lost task 9.0 in stage 26569.0 (TID 812602, 104-250-138-250.static.gorillaservers.com): kafka.common.NotLeaderForPartitionException
Two group IDs are consuming the same topic. After the warning above appears, one of the group IDs stops receiving any data.

2 answers

Other related questions
Kafka consumer group loses uncommitted messages

<div class="post-text" itemprop="text"> <p>I am using consumer group with just one consumer, just one broker ( docker wurstmeister image ). It's decided in a code to commit offset or not - if code returns error then message is not commited. I need to ensure that system does not lose any message - even if that means retrying same msg forever ( for now ;) ). For testing this I have created simple handler which does not commit offset in case of 'error' string send as message to kafka. All other strings are commited. </p> <pre><code>kafka-console-producer --broker-list localhost:9092 --topic test &gt;this will be commited </code></pre> <p>Now running </p> <pre><code>kafka-run-class kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9092 --group michalgrupa --describe </code></pre> <p>returns</p> <pre><code>TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID test 0 13 13 0 </code></pre> <p>so thats ok, there is no lag. Now we pass 'error' string to fake that something bad happened and message is not commited:</p> <pre><code>TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID test 0 13 14 1 </code></pre> <p>Current offset stays at right position + there is 1 lagged message. Now if we pass correct message again offset will move on to 15:</p> <p><code>TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG test 0 15 15</code> </p> <p>and message number 14 will not be picked up ever again. Is it default behaviour? Do I need to trace last offset and load message by it+1 manually? I have set commit interval to 0 to hopefully not use any auto.commit mechanism.</p> <p>fetch/commit code:</p> <pre><code>go func() { for { ctx := context.Background() m, err := mr.brokerReader.FetchMessage(ctx) if err != nil { break } if err := msgFunc(m); err != nil { log.Errorf("# messaging # cannot commit a message: %v", err) continue } // commit message if no error if err := mr.brokerReader.CommitMessages(ctx, m); err != nil { // should we do something else to just logging not committed message? log.Errorf("cannot commit message [%s] %v/%v: %s = %s; with error: %v", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value), err) } } }() </code></pre> <p>reader configuration:</p> <pre><code>kafkaReader := kafka.NewReader(kafka.ReaderConfig{ Brokers: brokers, GroupID: groupID, Topic: topic, CommitInterval: 0, MinBytes: 10e3, MaxBytes: 10e6, }) </code></pre> <p>library used: <a href="https://github.com/segmentio/kafka-go" rel="nofollow noreferrer">https://github.com/segmentio/kafka-go</a></p> </div>

If the Kafka consumer side processes data slowly, will a backlog build up?

As the title says: after the Kafka consumer receives a record it has to run some business logic, which can take around 3 seconds. If processing is that slow, what effect does it have on the program? Newbie question, any advice is appreciated!
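Two effects are generally worth keeping in mind: unprocessed records simply accumulate as consumer lag on the broker (nothing is lost while they stay within the topic's retention), and with the newer Java clients, if the gap between two poll() calls exceeds max.poll.interval.ms the consumer is removed from the group and a rebalance is triggered. A minimal sketch of sizing the poll loop for slow processing; the property values are illustrative assumptions, not recommendations:

```
import java.util.Properties;

public class SlowConsumerConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "slow-business-logic");      // hypothetical group id
        // Worst-case gap between polls is roughly max.poll.records * 3 s per record,
        // which must stay below max.poll.interval.ms to avoid a rebalance.
        props.put("max.poll.records", "50");               // 50 * 3 s = 150 s of work per poll
        props.put("max.poll.interval.ms", "300000");       // default 5 min; keep it above the worst case
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```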

Kafka consumer cannot consume any data

```
[root@hzctc-kafka-5d61 ~]# kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group sbs-haodian-message1 --topic Message --zookeeper 10.1.5.61:2181
[2018-04-18 16:43:43,467] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/sbs-haodian-message1/offsets/Message/8.
```
When I use this command to check how the consumer group is doing, it reports the error above, while the other consumer groups are fine. Does anyone know what causes this? In my consume logic I added a cache lock, so the interval after each poll() is not fixed; it may be 10 s, 20 s or 30 s, but my session timeout is set to 90 s. Does that have any impact?

Kafka consumer cannot consume messages

After deploying the Kafka cluster and the consumer servers in the production environment, logstash sends real-time logs to the Kafka cluster and the consumers consume them normally. But after about two minutes the consumers stop consuming, and I would like to ask how to track down where the problem is.
1: I checked the Kafka server logs; logstash is still pushing real-time logs to Kafka, and the cluster's log retention time is two hours.
2: There are two consumer machines, and both are running at the same time.
3: The Kafka cluster has three servers. While investigating I found that the consumers are only connected to one broker; I do not know whether that is where the problem lies.

Problem: cannot consume any data from Kafka

The Kafka cluster is set up correctly and I can both produce and consume messages through the console, but a Java program just cannot read any messages. I have even tried switching the group.

```
package test;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumer extends Thread {

    private String topic;

    public KafkaConsumer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        // Set the consumer parameters via Properties and create the connector to Kafka
        ConsumerConnector consumer = createConsumer();
        // The map specifies which topic to read and how many streams to use
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 3);
        // Get the message streams from the connector
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);
        // Take the data of one stream (partition) of the topic
        KafkaStream<byte[], byte[]> kafkaStream = messageStreams.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> iterator = kafkaStream.iterator();
        while (iterator.hasNext()) {
            byte[] message = iterator.next().message();
            System.out.println("message is:" + new String(message));
        }
    }

    private ConsumerConnector createConsumer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "XXX:2181");
        properties.put("auto.offset.reset", "smallest"); // read old data
        properties.put("group.id", "333fcdcd");
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    public static void main(String[] args) {
        new KafkaConsumer("testtest").start();
    }
}
```
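The code above uses the old Scala high-level consumer (kafka.javaapi.consumer), which goes through ZooKeeper. One common gotcha with either API: auto.offset.reset ("smallest"/"earliest") only applies when the group has no committed offset yet. For comparison, here is a minimal sketch of the same loop with the newer org.apache.kafka.clients.consumer.KafkaConsumer, which talks to the brokers directly; the broker address is a placeholder, the topic and group come from the question:

```
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewApiConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "XXX:9092");  // broker list, not the ZooKeeper address
        props.put("group.id", "333fcdcd");
        props.put("auto.offset.reset", "earliest");   // new-API equivalent of "smallest"
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("testtest"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.println("message is:" + new String(record.value()));
                }
            }
        }
    }
}
```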

Problem uploading Kafka data into HBase

My environment is an HDP pseudo-distributed cluster. In my project, Flume collects data and sends it to the various Kafka topics, and a jar then fetches the data from Kafka and writes it into HBase for persistence. Because the data volume is quite large, the regionserver dies after every half hour or so of transferring data. The project code itself should be fine, since I am still at the learning stage and other people can run it without errors. The problem looks like this:
```
java.io.FileNotFoundException: File /tmp/hbase-root/hbase/lib does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:431) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:674) ~[hadoop-common-2.7.3.jar!/:na]
    at org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:178) [hbase-common-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:142) [hbase-common-1.1.2.jar!/:1.1.2]
    at java.lang.Class.forName0(Native Method) [na:1.8.0_161]
    at java.lang.Class.forName(Class.java:348) [na:1.8.0_161]
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1543) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.protobuf.ResponseConverter.getResults(ResponseConverter.java:120) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:134) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:54) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) [hbase-client-1.1.2.jar!/:1.1.2]
    at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:708) [hbase-client-1.1.2.jar!/:1.1.2]
```
It suddenly starts looking for files under the path /tmp/hbase-root/hbase/lib, which it reports as not existing. My project never reads files from that path; when I go to that path it is empty, in fact the path does not exist at all. Then I looked at the HBase logs, and HBase hits me with a whole combo:
```
2020-03-21 19:29:49,789 ERROR [Thread-19] util.PolicyRefresher: PolicyRefresher(serviceName=Sandbox_hbase): failed to refresh policies. Will continue to use last known version of policies (6)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 8 more
```
Then comes the read timeout:
```
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
```
And then the most baffling exception:
```
2020-03-21 19:33:36,252 ERROR [Thread-19] util.PolicyRefresher: PolicyRefresher(serviceName=Sandbox_hbase): failed to refresh policies. Will continue to use last known version of policies (6)
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:135)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:264)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:202)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:171)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
    ... 8 more
```
Hoping someone can explain this.

Kafka consumption in a child thread stutters?

Two programs consume data. One program consumes on the main thread and has no problem; the other consumes on a child thread and stutters. Why?
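Hard to say without seeing the code, but if this is the Java client, one thing that often matters here is that KafkaConsumer is not thread-safe: the usual arrangement is to give each thread its own consumer instance and do both poll() and processing on that same thread. A minimal sketch of the "one consumer per thread" pattern (topic name is a placeholder):

```
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerThread extends Thread {
    private final Properties props;

    public ConsumerThread(Properties props) {
        this.props = props;
    }

    @Override
    public void run() {
        // The KafkaConsumer is created and used only on this thread; it must never be
        // shared with, or polled from, another thread.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic")); // placeholder topic
            while (!isInterrupted()) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // process(record) on this same thread
                }
            }
        }
    }
}
```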

Kafka accepts writes but the data cannot be consumed

The Kafka offset value never changes, yet data can still be written into the topic. What could cause this? If I change the offset manually, the data can be consumed.

What does Kafka consumer speed depend on?

```
@KafkaListener(topics = {"CRBKC0002.000"})
public void sendSmsInfoByBizType(String record) {
}
```
Assume a standalone Kafka with just one node.
1. With the @KafkaListener annotation receiving messages, does consumption only count as finished once this method has finished executing? Does that mean that within one instance (image) this method can only run once at a time, i.e. multiple threads cannot execute this method concurrently?
2. If consumption counts as finished as soon as the parameter is received, that is, the consumer is done once it gets the record, then suppose the producer writes 1,000,000 records per second into Kafka. If merely receiving the parameter ends the consumption, wouldn't that mean 1,000,000 threads running this method are started almost instantly? But Tomcat only has 200 threads.
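For what it's worth, in spring-kafka the number of @KafkaListener invocations that can run in parallel is controlled by the listener container's concurrency (and is still capped by the number of partitions of the topic); records are pulled in batches by the container's own poll loop, so there is never one thread spawned per record. A minimal sketch of a container factory with explicit concurrency; this assumes spring-kafka, and the broker address, group id and value of 3 are illustrative:

```
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class ListenerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sms-group");               // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // At most 3 listener threads; only useful if the topic has at least 3 partitions.
        factory.setConcurrency(3);
        return factory;
    }
}
```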

Consuming Kafka data from Python: why do the first few polls return nothing?

When consuming Kafka data from Python, why is nothing fetched right after connecting? The code is as follows:
```
# -*- coding:utf8 -*-
from kafka import KafkaConsumer
from kafka import TopicPartition
import kafka
import time

# Test how many records the kafka poll method can pull
consumer = KafkaConsumer(
    bootstrap_servers=['192.168.13.202:9092'],
    group_id='group-1',
    auto_offset_reset='earliest',
    enable_auto_commit=False)
consumer.subscribe('test')

print ("t1",time.time())
while True:
    print("t2", time.time())
    msg = consumer.poll(timeout_ms=100, max_records=5)  # fetch messages from kafka
    # print (len(msg))
    for i in msg.values():
        for k in i:
            print(k.offset, k.value)
    time.sleep(1)
```
But the printed result is:
```
t1 1567669170.438951
t2 1567669170.438951
t2 1567669171.8450315
t2 1567669172.945094
t2 1567669174.0471573
t2 1567669175.1472201
0 b'{"ast":"\xe7\x82\xb"}'
1 b'{"ast":"","dm":2}'
2 b'{"ast":"12"}'
3 b'{"ast":"sd"}'
4 b'{"ast":"12ds"}'
t2 1567669176.1822793
```
Why does it take about 5 polls after connecting to Kafka before any data comes back?

Spring Boot 1.5 with Kafka: how can the consumer acknowledge consumption itself?

With Kafka integrated into Spring Boot 1.5, how can the consumer acknowledge consumption itself? How do I use the @KafkaListener annotation together with Acknowledgment, i.e. how does the consumer commit the offset (cursor) itself?
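A minimal sketch of manual acknowledgment with spring-kafka; this assumes the spring-kafka 1.x API that ships with Boot 1.5 (where AckMode lives on AbstractMessageListenerContainer), auto-commit disabled, and the container switched to a manual ack mode. Broker address, group id and topic are placeholders:

```
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Configuration
class ManualAckConfig {
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "manual-ack-group");        // hypothetical group
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);           // required for manual ack
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        // Offsets are committed only when the listener calls ack.acknowledge().
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}

@Component
class ManualAckListener {
    @KafkaListener(topics = "some-topic") // placeholder topic
    public void listen(String record, Acknowledgment ack) {
        // ... process the record ...
        ack.acknowledge(); // commit the offset for this record
    }
}
```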

KafkaConsumer cannot consume data in batches

I am using KafkaConsumer to consume data in batches, but I can never get a batch. Judging from the log, the offset commits also look abnormal: with a regular pattern, only one out of every three commits succeeds, and every poll returns just one record, never a batch. This has been puzzling me for two days and I cannot explain it.
```
public class KafkaManualConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        System.setProperty("java.security.auth.login.config", "c:/kafka_client_jaas.conf"); // path to the JAAS config file
        properties.put("security.protocol", "SASL_PLAINTEXT");
        properties.put("sasl.mechanism", "PLAIN");
        properties.put("bootstrap.servers", "VM_0_16_centos:9092"); //kafka:9092
        properties.put("enable.auto.commit", "false");
        //properties.put("session.timeout.ms", 60000);
        properties.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);
        properties.put("fetch.max.wait.ms", 5000);
        properties.put("max.poll.records", 5000);
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("group.id", "yuu67u36");
        // properties.put("receive.buffer.bytes", 3276800);
        // properties.put("heartbeat.interval.ms", 59000);
        // properties.put("client.id", "t4t5t234f34f3f");
        // properties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 32*1024*1024);
        // properties.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 64*1024*1024);
        // properties.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 128*1024*1024);
        //properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // properties.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 2000*1024);

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Arrays.asList("topic-video-dev-attendphotos"));
        //kafkaConsumer.subscribe(Arrays.asList("topic-video-dev-stat"));

        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(1000L);
            System.out.println("-----------------");
            System.out.println(records.count());
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("offset = " + record.offset());
                VideoPhotoOuter dto = JSON.parseObject(record.value(), VideoPhotoOuter.class);
                System.out.println(dto.getPhotos().get(0).getPhotoFmt());
                //System.out.printf("offset = %d, value = %s", record.offset(), record.value());
            }
            try {
                kafkaConsumer.commitSync();
                Thread.currentThread().sleep(1000L);
            } catch (Exception ex) {
                // manually throw an SQLException here to roll back the transaction
            }
        }
        //kafkaConsumer.close();
    }
}
```
Below is the console log:
```
17:32:06.758 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [VM_0_16_centos:9092] check.crcs = true client.dns.lookup = default client.id = t4t5t234f34f3f client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 5000 fetch.min.bytes = 1 group.id = yuu67u36 group.instance.id = null heartbeat.interval.ms = 59000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 5000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 3276800 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 60000
retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = PLAIN security.protocol = SASL_PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 60000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer ....................... ........................ ----------------- 0 17:32:08.068 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90261 for partition topic-video-dev-attendphotos-0 ----------------- 0 17:32:10.095 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90261 for partition topic-video-dev-attendphotos-0 17:32:12.091 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Node 90 sent a full fetch response that created a new incremental fetch session 725149318 with 1 response partition(s) 17:32:12.092 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Fetch READ_UNCOMMITTED at offset 90261 for partition topic-video-dev-attendphotos-0 returned fetch data (error=NONE, highWaterMark=93666, lastStableOffset = 93666, logStartOffset = 5372, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576) 17:32:12.120 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topic-video-dev-attendphotos.bytes-fetched 17:32:12.120 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topic-video-dev-attendphotos.records-fetched 17:32:12.121 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic-video-dev-attendphotos-0.records-lag 17:32:12.121 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic-video-dev-attendphotos-0.records-lead 17:32:12.122 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Added READ_UNCOMMITTED fetch request for partition topic-video-dev-attendphotos-0 at position FetchPosition{offset=90262, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=VM_0_16_centos:9092 (id: 90 rack: null), epoch=0}} to node VM_0_16_centos:9092 (id: 90 rack: null) 17:32:12.122 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Built incremental fetch (sessionId=725149318, epoch=1) for node 90. 
Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 1 partition(s) 17:32:12.122 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(topic-video-dev-attendphotos-0), toForget=(), implied=()) to broker VM_0_16_centos:9092 (id: 90 rack: null) ----------------- 1 offset = 90261 JPG 17:32:12.239 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90262 for partition topic-video-dev-attendphotos-0 ----------------- 0 17:32:14.256 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90262 for partition topic-video-dev-attendphotos-0 ----------------- 0 17:32:16.279 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90262 for partition topic-video-dev-attendphotos-0 17:32:16.603 [kafka-coordinator-heartbeat-thread | yuu67u36] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Node 90 sent an incremental fetch response for session 725149318 with 1 response partition(s) 17:32:16.603 [kafka-coordinator-heartbeat-thread | yuu67u36] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Fetch READ_UNCOMMITTED at offset 90262 for partition topic-video-dev-attendphotos-0 returned fetch data (error=NONE, highWaterMark=93668, lastStableOffset = 93668, logStartOffset = 5372, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576) 17:32:17.280 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Added READ_UNCOMMITTED fetch request for partition topic-video-dev-attendphotos-0 at position FetchPosition{offset=90263, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=VM_0_16_centos:9092 (id: 90 rack: null), epoch=0}} to node VM_0_16_centos:9092 (id: 90 rack: null) 17:32:17.281 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Built incremental fetch (sessionId=725149318, epoch=2) for node 90. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 1 partition(s) 17:32:17.281 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(topic-video-dev-attendphotos-0), toForget=(), implied=()) to broker VM_0_16_centos:9092 (id: 90 rack: null) ----------------- 1 ```
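One detail from the log above, offered as a hedged guess rather than a diagnosis: each fetch reports recordsSizeInBytes=1048576, which is exactly the configured max.partition.fetch.bytes (1 MiB). If each photo message is close to 1 MiB, a single fetch can only carry one record, no matter how large max.poll.records is; the commented-out MAX_PARTITION_FETCH_BYTES_CONFIG and FETCH_MAX_BYTES_CONFIG lines in the question's code point at the same knobs. A sketch of the sizing settings involved (the values are illustrative assumptions):

```
import java.util.Properties;

public class LargeMessageFetchConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "VM_0_16_centos:9092");
        props.put("group.id", "yuu67u36");
        // How many bytes one partition may contribute to a single fetch (default 1 MiB).
        props.put("max.partition.fetch.bytes", String.valueOf(32 * 1024 * 1024));
        // Upper bound on an entire fetch response (default 50 MiB).
        props.put("fetch.max.bytes", String.valueOf(64 * 1024 * 1024));
        // max.poll.records is only a cap; poll() returns fewer records if the
        // fetched bytes contain only one message.
        props.put("max.poll.records", "500");
        return props;
    }
}
```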

Multiple Kafka consumers consuming the same data

When consumers under different group IDs consume the same record of the same topic, why does the offset value not change? Was the record only consumed once?

Spark reading Kafka data and caching the current day's data

Spark Streaming reads data from Kafka at a 10-second interval, and I need to cache the current day's data for business analysis.
Idea 1: define a static RDD and union it with every received RDD, using a window (window length 1 hour, sliding step 20 minutes), and checkpoint after the union. But when the static RDD is used for the business analysis, execution takes far too long, presumably because of disk I/O.
Idea 2: likewise define a static RDD and call remember(24 hours) on the context to keep the data for 24 hours (where that data is actually cached, I honestly don't know); but during the analysis the static RDD's count is 0.
How can I cache a period's worth of RDD data? Putting the data in executor memory or spreading it across the workers are both fine. A day's data is roughly 100 GB, about 5 GB after filtering, and the machines have 256 GB of RAM.

Implementing a consumer with the Kafka consumer Java API: KafkaStream prints no data

Kafka 2.2.0, consumer implemented with the consumer Java API; KafkaStream prints no data.
```
package kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumerTest extends Thread {
    // runs fine in a Linux environment

    @Override
    public void run() {
        // TODO Auto-generated method stub
        String topic = "powerTopic";
        Properties pro = new Properties();
        pro.put("zookeeper.connect", "10.2.2.61:2181,10.2.2.62:2181,10.2.2.63:2181");
        pro.put("group.id", "test");
        // pro.put("zookeeper.session.timeout.ms", "4000");
        // pro.put("consumer.timeout.ms", "-1");
        ConsumerConfig paramConsumerConfig = new ConsumerConfig(pro);
        ConsumerConnector cosumerConnector = Consumer.createJavaConsumerConnector(paramConsumerConfig);
        Map<String, Integer> paramMap = new HashMap<String, Integer>();
        paramMap.put(topic, 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStream = cosumerConnector.createMessageStreams(paramMap);
        KafkaStream<byte[], byte[]> kafkastream = messageStream.get(topic).get(0);
        // System.out.println(kafkastream.size());
        System.out.println("hello");
        ConsumerIterator<byte[], byte[]> iterator = kafkastream.iterator();
        while (iterator.hasNext()) {
            // MessageAndMetadata<byte[], byte[]> message = iterator.next();
            // String topic1 = message.topic();
            String msg = new String(iterator.next().message());
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        new KafkaConsumerTest().start();
        new MyProducer01().start();
    }
}
```
The Kafka environment runs on CentOS. When I run the program from Eclipse on Windows, no data is printed, and it neither finishes nor reports an error:
![screenshot](https://img-ask.csdn.net/upload/201912/20/1576807897_599340.jpg)
Result after packaging and running in the cluster environment:
![screenshot](https://img-ask.csdn.net/upload/201912/20/1576808400_74063.jpg)

Kafka consumption problem: consumption times out after running for a while; asking for the cause

``` 80 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.value:{"tableName":"sw_segment","operateType":"INSERT","operateId":"4921.43.15759673490360004","indexType":"type","storageType":"elasticsearch","date":1575967330707,"tableData":{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}} 16:40:05,480 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.FormatData,tableName:sw_segment|operateId:4921.43.15759673490360004|tableMap:{trace_id=4921.43.15759673490360005, endpoint_name=/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat, latency=3, end_time=1575967349039, endpoint_id=189076, service_instance_id=4921, version=2, start_time=1575967349036, data_binary=Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm, service_id=13, time_bucket=20191210164229, is_error=0, segment_id=4921.43.15759673490360004} 16:40:05,480 DEBUG [ElasticSearchClient] Executing bulk [32] with 8 requests 16:40:05,481 DEBUG [MainClientExec] [exchange: 44] start execution 16:40:05,481 DEBUG [RequestAddCookies] CookieSpec selected: default 16:40:05,481 DEBUG [RequestAuthCache] Re-using cached 'basic' auth scheme for http://10.23.11.224:9200 16:40:05,481 DEBUG [RequestAuthCache] No credentials for preemptive authentication 16:40:05,481 DEBUG [InternalHttpAsyncClient] [exchange: 44] Request connection for {}->http://10.23.11.224:9200 16:40:05,481 DEBUG [PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:05,482 DEBUG [PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]} 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:r]: Event set [w] 16:40:05,482 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,482 DEBUG [InternalHttpAsyncClient] Connection route already established 16:40:05,482 DEBUG [MainClientExec] [exchange: 44] Attempt 1 to execute request 16:40:05,482 DEBUG 
[MainClientExec] Target auth state: UNCHALLENGED 16:40:05,482 DEBUG [MainClientExec] Proxy auth state: UNCHALLENGED 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Set timeout 30000 16:40:05,482 DEBUG [headers] http-outgoing-0 >> POST /_bulk?timeout=1m HTTP/1.1 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Length: 6657 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Type: application/json 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Host: 10.23.11.224:9200 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Connection: Keep-Alive 16:40:05,482 DEBUG [headers] http-outgoing-0 >> User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221) 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Event set [w] 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 4096; completed: false] 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 4293 bytes written 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "POST /_bulk?timeout=1m HTTP/1.1[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Length: 6657[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Type: application/json[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Host: 10.23.11.224:9200[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221)[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.88.15759673496781143","endpoint_name":"/authentication","latency":71,"end_time":1575967349749,"endpoint_id":150,"service_instance_id":11,"version":2,"start_time":1575967349678,"data_binary":"CgwKCgtY1oK15O6q/xsS3gEIARivt+X37i0gurfl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6kwEKDGRiLnN0YXRlbWVudBKCAXNlbGVjdCB0LmNoZWNrX3RpbWUsdC5leHRlbmRfaW5mbyx0LnVzZXJfbmFtZSx0LmxvZ2luX2NoYW5uZWwgZnJvbSBzc29fdXNlcl9zZXNzaW9uIHQgd2hlcmUgdC50aWNrZXQgPSA/IGFuZCB0LmxvZ291dF90aW1lIGlzIG51bGwSnAEIAhjGt+X37i0g2Lfl9+4tMJUBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6UgoMZGIuc3RhdGVtZW50EkJ1cGRhdGUgc3NvX3VzZXJfc2Vzc2lvbiB0IHNldCB0LmV4dGVuZF9pbmZvID0gPyB3aGVyZSB0LnRpY2tldCA9ID8SWAgDGNm35ffuLSDrt+X37i0wlAFABVABWAFgIXoOCgdkYi50eXBlEgNzcWx6GwoLZGIuaW5zdGFuY2USDHR5Z3pwdF9kenN3anoOCgxkYi5zdGF0ZW1lbnQSZhD///////////8BGK635ffuLSD1t+X37i0wlgFYA2ABejAKA3VybBIpaHR0cDovL25zc28uZHpzd2pqYy50YXguY24vYXV0aGVudGljYXRpb256EgoLaHR0cC5tZXRob2QSA0dFVBgMIAs=","service_id":12,"time_bucket":20191210164229,"is_error":0,"segment_id":"11.88.15759673496781142"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> 
"{"trace_id":"4921.36.15759673457660021","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":377,"end_time":1575967346143,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967345766,"data_binary":"Cg0KC7kmJPSg4dHuqv8bErYBEP///////////wEY5pjl9+4tIN+b5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164225,"is_error":0,"segment_id":"4921.36.15759673457660020"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.36.15759673461450023","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":69,"end_time":1575967346214,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967346145,"data_binary":"Cg0KC7kmJKbKyNPuqv8bErYBEP///////////wEY4Zvl9+4tIKac5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164226,"is_error":0,"segment_id":"4921.36.15759673461450022"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673529984299","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":14,"end_time":1575967353012,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967352998,"data_binary":"CgwKCgslqsqf9O6q/xsSsAEQ////////////ARim0eX37i0gtNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164232,"is_error":0,"segment_id":"11.37.15759673529984298"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673530124301","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":12,"end_time":1575967353024,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967353012,"data_binary":"CgwKCgsljJCo9O6q/xsSsAEQ////////////ARi00eX37i0gwNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.37.15759673530124300"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"/v4/default/registry/mi" 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] Request completed 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 6657; 
completed: true] 16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 2561 bytes written 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "croservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":2,"end_time":1575967353965,"endpoint_id":146,"service_instance_id":11,"version":2,"start_time":1575967353963,"data_binary":"CgwKCgsv2LPs+O6q/xsSwwEQ////////////ARjr2OX37i0g7djl9+4tMJIBQAdQAVgDYDt6EgoLaHR0cC5tZXRob2QSA1BVVHqIAQoDdXJsEoABL3Y0L2RlZmF1bHQvcmVnaXN0cnkvbWljcm9zZXJ2aWNlcy82MmFmODg0MDMxMmM0NzUwMzcwYzNlYTY0ZmQ2ODIwM2JmMDJkNTE4L2luc3RhbmNlcy8xNTYzZTNkMTFhNWYxMWVhYmQ1ODAwNTA1NmI2N2NjNC9oZWFydGJlYXQYDCAL","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539631576"}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578"}}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"#/v4/default/registry/microservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":0,"end_time":1575967353965,"endpoint_id":147,"service_instance_id":11,"version":2,"start_time":1575967353965,"data_binary":"CgwKCgsv+s/t+O6q/xsSwAMQ////////////ARjt2OX37i0g7djl9+4tKpoCCAESDAoKCy/Ys+z47qr/GyALOAtCgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzLzYyYWY4ODQwMzEyYzQ3NTAzNzBjM2VhNjRmZDY4MjAzYmYwMmQ1MTgvaW5zdGFuY2VzLzE1NjNlM2QxMWE1ZjExZWFiZDU4MDA1MDU2YjY3Y2M0L2hlYXJ0YmVhdFKAAS92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0OoEBIy92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0UAJYA2A7GAwgCw==","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539651578"}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004"}}[\n]" 16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}[\n]" 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:w]: Event cleared [w] 16:40:06,073 DEBUG [FetchSessionHandler] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node 0 sent an incremental fetch response for session 520315326 with 0 response partition(s), 1 implied partition(s) 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 1460 bytes read 
16:40:07,365 DEBUG [wire] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-length: 3697[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "[\r][\n]" 16:40:07,365 DEBUG [wire] http-outgoing-0 << "{"took":1872,"errors":true,"items":[{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022","_version":1,"result":"created","_shards"" 16:40:07,365 DEBUG [headers] http-outgoing-0 << HTTP/1.1 200 OK 16:40:07,365 DEBUG [headers] http-outgoing-0 << content-type: application/json; charset=UTF-8 16:40:07,365 DEBUG [headers] http-outgoing-0 << content-length: 3697 16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Response received 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response received HTTP/1.1 200 OK 16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Input ready 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Consume content 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 2325 bytes read 16:40:07,365 DEBUG [wire] http-outgoing-0 << ":{"total":1,"successful":1,"failed":0},"_seq_no":24332,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24333,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of 
[32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24334,"_primary_term":1,"status":201}}]}" 16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection can be kept alive indefinitely 16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response processed 16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] releasing connection 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-0][route: {}->http://10.23.11.224:9200] can be kept alive indefinitely 16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:07,367 DEBUG [RestClient] request [POST http://10.23.11.224:9200/_bulk?timeout=1m] returned [HTTP/1.1 200 OK] 16:40:07,367 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 3697; pos: 3697; completed: true] 16:40:08,187 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:40:08,389 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:40:11,203 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:40:11,404 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:40:14,221 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 
10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:20,774 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:41:23,589 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:23,790 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response actCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 16:41:38,665 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 16:41:38,867 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:13:13,402 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:13:13,605 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node -1 disconnected. 17:13:13,606 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:13:13,707 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending metadata request (type=MetadataRequest, topics=compute_traceStorage, allowAutoCreate=true) to node 10.23.11.235:9092 (id: 0 rack: null) 17:13:13,907 DEBUG [Metadata] Updating last seen epoch from 0 to 0 for partition compute_traceStorage-0 17:13:13,907 DEBUG [Metadata] Updated cluster metadata version 4 to MetadataCache{cluster=Cluster(id = yQ_sRlMlSui8hlVtaPl4wg, nodes = [10.23.11.235:9092 (id: 0 rack: null)], partitions = [Partition(topic = compute_traceStorage, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])], controller = 10.23.11.235:9092 (id: 0 rack: null))} 17:13:16,420 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:13:16, 17:15:02,170 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response 17:15:04,683 WARN [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. 17:15:04,683 INFO [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Member consumer-1-f8b4d0da-f83c-4849-8cfd-74e748aad3c7 sending LeaveGroup request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null) 17:15:04,683 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Disabling heartbeat thread ```
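The WARN near the end of the log states the cause directly: the member leaves the group because the gap between poll() calls exceeded max.poll.interval.ms, and in this log the poll loop appears to spend minutes on Elasticsearch bulk writes, many of which come back rejected with 429 (es_rejected_execution_exception). Following the log's own suggestion, here is a sketch of the two consumer settings it names; the values are illustrative assumptions, not a tuning recommendation, and the Elasticsearch rejections would still need fixing separately.

```
import java.util.Properties;

public class PollIntervalConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.23.11.235:9092");
        props.put("group.id", "jkpt-transfer-group");
        // Give slow downstream writes (here: Elasticsearch bulk indexing) more headroom ...
        props.put("max.poll.interval.ms", "900000"); // 15 min instead of the 5 min default
        // ... and/or hand the poll loop smaller batches so each iteration finishes sooner.
        props.put("max.poll.records", "100");
        return props;
    }
}
```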

Kafka data retention: want to change the configuration

Kafka's default retention time is 7 days. Because I am testing, I want it to keep data longer, so I changed these two settings: log.retention.hours=1440 and log.retention.bytes=1073741824. But during today's tests I found that the Kafka data was gone; only four messages were left. I would like to know why the data disappeared and what caused it. Any guidance is appreciated!

[Help] When Structured Streaming consumes Kafka data, how do you guarantee completeness and exactly-once consumption?

I read the official docs. With the old Spark Streaming direct mode you could maintain the offsets yourself, which felt fairly reliable. Now when Structured Streaming uses Kafka, enable.auto.commit cannot be set, and according to the docs Structured Streaming does not commit any offsets. So in the newer versions, how does Spark guarantee that Kafka data is consumed exactly once, or at least at-least-once?
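As far as I understand the Structured Streaming docs, offsets are not tracked in Kafka at all: progress is recorded in the query's checkpoint (offset ranges plus state), and end-to-end exactly-once additionally requires an idempotent or transactional sink. A minimal Java sketch of the part the question is about, i.e. enabling checkpointing on a Kafka source query; servers, topic and paths are placeholders:

```
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class CheckpointedKafkaQuery {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("kafka-exactly-once-sketch").getOrCreate();

        Dataset<Row> kafka = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "host1:9092") // placeholder
                .option("subscribe", "some-topic")                // placeholder
                .load();

        StreamingQuery query = kafka.selectExpr("CAST(value AS STRING)")
                .writeStream()
                .format("parquet")                                // a replayable/idempotent sink
                .option("path", "/data/out")                      // placeholder
                // Offsets and progress live here, not in Kafka's __consumer_offsets.
                .option("checkpointLocation", "/data/checkpoints/kafka-exactly-once")
                .start();

        query.awaitTermination();
    }
}
```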

How to increase message processing speed when the Kafka consumer is slow, without adding partitions?

As the title says. This is from a recent interview, supposedly about Kafka's theoretical characteristics; I may have misunderstood the exact requirements, so if you have studied this and can see the issue at a glance, a hint would be appreciated. From my searching, the ways to improve consumption performance are: increase the number of partitions (more consumer parallelism) [not allowed], or use multiple threads in the consumer; but if the message processing is CPU-bound, adding threads does not help either. Maybe my understanding is wrong? To rephrase the question: the producer generates 10,000 messages per second, while all the consumers together can only process 5,000 per second, and the processing is pure CPU computation. Question: without adding partitions, how do you increase the message processing speed?
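One commonly discussed pattern, offered as a sketch rather than a claim about what the interviewer wanted: keep a single poll loop but hand the CPU-bound work to a worker pool sized to the available cores and commit only after the batch is done. This raises throughput per partition as long as spare cores exist (across machines it only helps up to the partition count), and ordering within a partition is no longer guaranteed across workers. Broker, group and topic names below are illustrative.

```
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollThenParallelProcess {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "cpu-bound-workers");        // hypothetical group
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // One worker per core for CPU-bound processing.
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                List<Future<?>> inFlight = new ArrayList<>();
                for (ConsumerRecord<String, String> record : records) {
                    inFlight.add(pool.submit(() -> process(record))); // CPU-heavy work off the poll thread
                }
                for (Future<?> f : inFlight) {
                    f.get(); // wait for the whole batch before committing
                }
                if (!records.isEmpty()) {
                    consumer.commitSync(); // at-least-once: commit only after processing finished
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for the pure-CPU computation
    }
}
```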
