Kafka is writing hundreds of GB of logs to Platform.log

These lines are written nonstop, for no obvious reason:

```
0918 150001 867 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1474179379371] Added fetcher for partitions ArrayBuffer()
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Verifying properties
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Property client.id is overridden to operationplat
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Property metadata.broker.list is overridden to
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Property request.timeout.ms is overridden to 30000
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] WARN kafka.consumer.ConsumerFetcherManager$LeaderFinderThread: [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread], Failed to find leader for Set([operAsynDown,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(operAsynDown)] from broker [ArrayBuffer()] failed
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
	at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
```

Can this logging be turned off, or how can I stop the exception from happening in the first place? Please help!
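The pasted lines actually contain the clue: `Property metadata.broker.list is overridden to` with an empty value, and `from broker [ArrayBuffer()]`, both say the old high-level consumer's leader-finder thread can see no live broker registered in ZooKeeper, so it retries in a tight loop and floods Platform.log. The real fix is to bring the brokers back up (or point `zookeeper.connect` at the correct ensemble, chroot path included). If you only need to stop the flood while investigating, you can raise the level of the chatty loggers; a minimal sketch for log4j 1.x (which the 0.8.x-era client uses), assuming the application is configured via log4j.properties:

```
# Quiet the 0.8.x consumer's retry chatter; logger names come from the log above.
log4j.logger.kafka.consumer=ERROR
log4j.logger.kafka.client=ERROR
log4j.logger.kafka.utils.VerifiableProperties=ERROR
```

This only mutes the symptom: as long as the broker list stays empty, the consumer still cannot fetch anything.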

1 answer

qq_33421256 (replied about 2 years ago): "You're really something."

Other related questions
Flink + Kafka stream splitting fails: org.apache.kafka.common.serialization.ByteArraySerializer is not an instance of org.apache.kafka.common.serialization.Serializer

I hit a very strange intermittent exception while using Flink with Kafka to split a stream. Has anyone seen this? After the split there are two sinks writing to two topics. The exception is sporadic, but once it occurs it keeps firing repeatedly:

```
org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593)
	at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
	at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.abortTransactions(FlinkKafkaProducer.java:1099)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initializeState(FlinkKafkaProducer.java:1036)
	at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
	at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:901)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:415)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:430)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
	at org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:76)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.lambda$abortTransactions$2(FlinkKafkaProducer.java:1107)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.serialization.ByteArraySerializer is not an instance of org.apache.kafka.common.serialization.Serializer
	at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:304)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:360)
	... 12 more
```
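An "X is not an instance of X" error, where the two class names are identical, is the classic signature of the same class being loaded by two different classloaders. That fits Flink's child-first user-code classloading and the intermittent occurrence during state restore. A hedged workaround, assuming kafka-clients also sits in flink/lib or another parent-visible location, is to switch the resolve order in flink-conf.yaml:

```
# flink-conf.yaml: make user code see the parent's kafka-clients classes
classloader.resolve-order: parent-first
```

The inverse also works: mark kafka-clients as provided in the job's fat jar so only one copy exists at runtime.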

Kafka fails to start: java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V

I've tried a lot of things: downgraded ZooKeeper so it matches the version Kafka depends on (ZK 3.4.14, Kafka 2.3); removed the Scala environment variable; still no luck. There is only one JAVA_HOME.

```
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
	at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:238)
	at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$2(ZooKeeperClient.scala:160)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
	at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
	at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1(ZooKeeperClient.scala:160)
	at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1$adapted(ZooKeeperClient.scala:156)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:156)
	at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1660)
	at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1647)
	at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1642)
	at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1712)
	at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
	at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:262)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:84)
	at kafka.Kafka.main(Kafka.scala)
```
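`ZooKeeper.multi(Iterable, AsyncCallback.MultiCallback, Object)` exists in the ZooKeeper 3.4.14 client jar that Kafka 2.3 ships with, so a NoSuchMethodError at runtime means an older zookeeper jar is shadowing it on the broker's classpath; the version of the standalone ZK server is irrelevant to this error. A hedged way to hunt for the stray jar, assuming a Linux install under $KAFKA_HOME:

```
# There should be exactly one zookeeper jar: the one Kafka ships in libs/
ls "$KAFKA_HOME/libs" | grep -i zookeeper
# kafka-run-class.sh appends $CLASSPATH; make sure no old zookeeper jar sneaks in here
echo "$CLASSPATH"
```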

How do I fix this error when Flume ingests from Kafka?

Error output:

```
Source.java:120)] Event #: 0
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 965
2018-11-23 17:59:18,995 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 975
2018-11-23 17:59:19,005 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 985
2018-11-23 17:59:19,015 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 995
2018-11-23 17:59:19,025 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:119)] Waited: 1006
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [DEBUG - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:120)] Event #: 0
2018-11-23 17:59:19,036 (PollableSourceRunner-KafkaSource-kaSource) [ERROR - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:153)] KafkaSource EXCEPTION, {}
java.lang.NullPointerException
	at org.apache.flume.instrumentation.MonitoredCounterGroup.increment(MonitoredCounterGroup.java:261)
	at org.apache.flume.instrumentation.kafka.KafkaSourceCounter.incrementKafkaEmptyCount(KafkaSourceCounter.java:49)
	at org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:146)
	at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:139)
	at java.lang.Thread.run(Thread.java:748)
```

Configuration file:

```
kafkaLogger.sources = kaSource
kafkaLogger.channels = memoryChannel
kafkaLogger.sinks = kaSink

# The channel can be defined as follows.
kafkaLogger.sources.kaSource.channels = memoryChannel
kafkaLogger.sources.kaSource.type = org.apache.flume.source.kafka.KafkaSource
kafkaLogger.sources.kaSource.zookeeperConnect = 192.168.130.4:2181,192.168.130.5:2181,192.168.130.6:2181
kafkaLogger.sources.kaSource.topic = dwd-topic
kafkaLogger.sources.kaSource.groupId = 0
kafkaLogger.channels.memoryChannel.type = memory
kafkaLogger.channels.memoryChannel.capacity = 1000
kafkaLogger.channels.memoryChannel.keep-alive = 60
kafkaLogger.sinks.kaSink.type = elasticsearch
kafkaLogger.sinks.kaSink.hostNames = 192.168.130.6:9300
kafkaLogger.sinks.kaSink.indexName = flume_mq_es_d
kafkaLogger.sinks.kaSink.indexType = flume_mq_es
kafkaLogger.sinks.kaSink.clusterName = zyuc-elasticsearch
kafkaLogger.sinks.kaSink.batchSize = 100
kafkaLogger.sinks.kaSink.client = transport
kafkaLogger.sinks.kaSink.serializer = com.commons.flume.sink.elasticsearch.CommonElasticSearchIndexRequestBuilderFactory
kafkaLogger.sinks.kaSink.serializer.parse = com.commons.log.parser.LogTextParser
kafkaLogger.sinks.kaSink.serializer.formatPattern = yyyyMMdd
kafkaLogger.sinks.kaSink.serializer.dateFieldName = time
kafkaLogger.sinks.kaSink.channel = memoryChannel
```
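The NPE comes from KafkaSourceCounter.incrementKafkaEmptyCount: on a poll that returns zero events, the Flume 1.6-era Kafka source increments a counter that was never registered, which looks like a Flume bug rather than a configuration mistake, and later Flume releases fix it. Upgrading also lets you drop the deprecated ZooKeeper-based source config; a hedged sketch of the Flume 1.7+ style (the broker ports and group id below are assumptions):

```
# Flume >= 1.7 KafkaSource talks to brokers directly instead of ZooKeeper
kafkaLogger.sources.kaSource.type = org.apache.flume.source.kafka.KafkaSource
kafkaLogger.sources.kaSource.kafka.bootstrap.servers = 192.168.130.4:9092,192.168.130.5:9092,192.168.130.6:9092
kafkaLogger.sources.kaSource.kafka.topics = dwd-topic
kafkaLogger.sources.kaSource.kafka.consumer.group.id = flume-dwd
kafkaLogger.sources.kaSource.channels = memoryChannel
```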

Kafka stops after running for a while

Error:

```
ERROR Error while deleting segments for MenuChangedEvent.domain.FZ-water-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.NoSuchFileException: /tmp/kafka-logs/MenuChangedEvent.domain.FZ-water-0/00000000000000000000.log
...
[2019-05-07 11:03:48,234] ERROR Shutdown broker because all log dirs in /tmp/kafka-logs have failed (kafka.log.LogManager)
```

Note: there is no tmpwatch under /etc/cron.daily. How do I fix this? The broker shuts down after running for a while; after a restart it runs for a while again, then dies.
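Segments vanishing out from under the broker while log.dirs points at /tmp is almost always the OS tmp cleaner: on systemd distributions, systemd-tmpfiles prunes stale files under /tmp on a timer even when /etc/cron.daily/tmpwatch does not exist. Kafka then hits NoSuchFileException on a segment it believes it owns, marks the whole log dir failed, and shuts down. The durable fix is to move the data directory somewhere persistent; a minimal sketch (the target path is an assumption, any non-tmp disk works):

```
# server.properties: keep Kafka data out of /tmp
log.dirs=/var/lib/kafka-logs
```

Stop the broker, move (or recreate) the data at the new path, then restart.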

Kafka on Windows: error on restart after creating a topic

```
[2017-12-20 17:20:15,475] WARN Found a corrupted index file due to requirement failed: Corrupt index found, index file (D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.index) has non-zero size but the last offset is 0 which is no larger than the base offset 0.}. deleting D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.timeindex, D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.index, and D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.txnindex and rebuilding index... (kafka.log.Log)
[2017-12-20 17:20:15,475] ERROR Error while loading log dir D:\program\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
java.nio.file.FileSystemException: D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0\00000000000000000000.timeindex: 另一个程序正在使用此文件,进程无法访问。 (the process cannot access the file because it is being used by another program)
	at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
	at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
	at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
	at java.nio.file.Files.deleteIfExists(Files.java:1165)
	at kafka.log.Log.$anonfun$loadSegmentFiles$3(Log.scala:335)
	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:191)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
	at kafka.log.Log.loadSegmentFiles(Log.scala:297)
	at kafka.log.Log.loadSegments(Log.scala:406)
	at kafka.log.Log.<init>(Log.scala:203)
	at kafka.log.Log$.apply(Log.scala:1735)
	at kafka.log.LogManager.loadLog(LogManager.scala:231)
	at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:292)
	at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
```
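This is the long-standing Kafka-on-Windows file-locking problem: at startup the broker decides the index is corrupt and tries to delete and rebuild it, but Windows refuses to delete a file that is still memory-mapped, so loading the log dir fails. There is no clean configuration fix; the usual dev-box workarounds are to wipe the topic's log directory before restarting (local data is lost), or to run Kafka on Linux for anything beyond experiments. A hedged cleanup sketch, assuming the broker is fully stopped first:

```
rem discard local data for the topic so the broker can start fresh
rd /s /q D:\program\kafka_2.12-1.0.0\kafka-logs\linlin-0
```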

A Kafka startup problem

![screenshot](https://img-ask.csdn.net/upload/201704/14/1492133543_551349.png)

kafka.common.KafkaException: fetching topic metadata failed

```
package com;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {

    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer<Integer, String> producer = createProducer();
        int i = 0;
        while (true) {
            producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer<Integer, String> createProducer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "localhost:2181");      // ZooKeeper address
        properties.put("serializer.class", StringEncoder.class.getName());
        properties.put("metadata.broker.list", "localhost:9092");   // Kafka broker list
        return new Producer<Integer, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaProducer("test").start(); // uses the "test" topic already created on the cluster
    }
}
```

```
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
	at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
	at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
	at kafka.utils.Utils$.swallow(Utils.scala:172)
	at kafka.utils.Logging$class.swallowError(Logging.scala:106)
	at kafka.utils.Utils$.swallowError(Utils.scala:45)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
	at kafka.producer.Producer.send(Producer.scala:77)
	at kafka.javaapi.producer.Producer.send(Producer.scala:33)
	at com.kafkaProducer.run(kafkaProducer.java:29)
Caused by: java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
	... 9 more
```
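The `Caused by: java.nio.channels.ClosedChannelException` during fetchTopicMetadata means the TCP connection to localhost:9092 was refused or dropped: either no broker is actually listening there, or the broker registered a different host name in ZooKeeper so the metadata handshake dies. A hedged broker-side checklist for this 0.8.x-era stack (the values are assumptions for a single localhost broker):

```
# server.properties: make the broker reachable under the exact name the client uses
port=9092
host.name=localhost
advertised.host.name=localhost
```

If the broker is remote, advertised.host.name must be a name the client machine can resolve, not localhost.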

Spring Boot + Kafka: how do I configure max.request.size?

The project uses Spring Boot + Kafka. One message payload is fairly large and triggers this exception:

```
The message is 1330537 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
```

I can't figure out where this is supposed to be configured, and tracing the source didn't help either. Could someone point me in the right direction? The rest of the configuration in application.properties looks like this:

```
spring.kafka.bootstrap-servers=****
spring.kafka.consumer.group-id=vprGroup
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.ByteArraySerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.ByteArraySerializer
```

Following the same naming pattern doesn't work either:

```
spring.kafka.producer.max-request-size=2097152
```
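Spring Boot only has dedicated keys for the most common Kafka settings; everything else is passed through verbatim via the properties map, which is why the invented producer.max-request-size key does nothing. A hedged application.properties sketch, assuming Spring Boot 2.x (on older 1.5.x releases the shared spring.kafka.properties.* form may be needed instead):

```
# passed straight through to the producer's ProducerConfig
spring.kafka.producer.properties.max.request.size=2097152
```

Note the limit exists on three sides: the producer (max.request.size), the broker/topic (message.max.bytes / max.message.bytes), and the consumer (max.partition.fetch.bytes); all of them must allow the payload.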

Running the Flume agent produces the error below

My configuration:

```
agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type = spooldir
agent.sources.s1.spoolDir = /tmp/logs/tomcat2kafka
agent.sources.s1.channels = c1

agent.channels.c1.type = memory
agent.channels.c1.capacity = 10000
agent.channels.c1.transactionCapacity = 100

# Kafka sink
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.k1.brokerList = 222.30.194.254:9092
# Kafka topic
agent.sinks.k1.topic = kafkatest2
# serialization
agent.sinks.k1.serializer.class = kafka.serializer.StringEncoder
agent.sinks.k1.channel = c1
```

Error output:

```
[ERROR - org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:240)] Failed to publish events
org.apache.kafka.common.errors.InterruptException: Flush interrupted.
	at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:546)
	at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:224)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
	at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
	at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
	... 4 more
```

I really couldn't find an answer for this online. Points offered, please help.
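InterruptException: Flush interrupted is usually a secondary symptom: Flume's sink runner interrupts the sink thread (during shutdown, reconfiguration, or a lifecycle timeout) while KafkaProducer.flush() is still blocked waiting for acks. The real question is why the flush never completed; most often the broker at 222.30.194.254:9092 is unreachable from the agent host, or records are never acknowledged. Newer Flume releases also configure the sink with bootstrap-server style keys; a hedged sketch using Flume >= 1.7 property names:

```
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.kafka.bootstrap.servers = 222.30.194.254:9092
agent.sinks.k1.kafka.topic = kafkatest2
agent.sinks.k1.channel = c1
```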

Kafka produce/consume example errors out

## The consumer errors out on startup and I honestly can't tell what's wrong. Help!

Producer:

```
./kafka-console-producer.sh --broker-list S1PA11:9092,S1PA22:9092,S1PA33:9092 --topic AF_3
12
23
44
5
5
576
[2017-11-12 15:32:57,728] ERROR Error when sending message to topic AF_3 with key: null, value: 2 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2017-11-12 15:33:57,731] ERROR Error when sending message to topic AF_3 with key: null, value: 2 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
^C
geconline@S1PA33:~/kafka/kafka_2.10-0.9.0.0/bin$ [2017-11-12 15:37:14,884] INFO [Group Metadata Manager on Broker 3]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
```

Consumer:

```
geconline@S1PA22:~/kafka/kafka_2.10-0.9.0.0/bin$ ./kafka-console-consumer.sh --zookeeper s1pa11:9092,s1pa22:9092,s1pa33:9092 --from-beginning --topic AF_3
No brokers found in ZK.
```
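The consumer error is self-inflicted: `--zookeeper` must point at the ZooKeeper ensemble (port 2181 by default), but the command passes the Kafka broker ports (9092), so the tool connects to a broker as if it were ZooKeeper and naturally finds no /brokers registrations. The producer-side "Failed to update metadata" errors are a separate problem worth checking (broker health, advertised host names). A hedged corrected invocation, assuming ZooKeeper listens on 2181 on the same hosts:

```
./kafka-console-consumer.sh --zookeeper S1PA11:2181,S1PA22:2181,S1PA33:2181 \
  --from-beginning --topic AF_3
```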

Kafka + Spring integration: Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties

Integrating Spring Boot with Kafka fails with Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties. Details:

```
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
	at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:507) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.util.ReflectionUtils.doWithLocalMethods(ReflectionUtils.java:367) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.buildLifecycleMetadata(InitDestroyAnnotationBeanPostProcessor.java:208) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.findLifecycleMetadata(InitDestroyAnnotationBeanPostProcessor.java:189) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(InitDestroyAnnotationBeanPostProcessor.java:128) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(CommonAnnotationBeanPostProcessor.java:297) ~[spring-context-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:1013) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:547) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	... 15 common frames omitted
Caused by: java.lang.NoClassDefFoundError: org/springframework/kafka/listener/config/ContainerProperties
	at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_131]
	at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_131]
	at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_131]
	at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:489) ~[spring-core-5.0.13.RELEASE.jar:5.0.13.RELEASE]
	... 22 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.springframework.kafka.listener.config.ContainerProperties
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_131]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_131]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) ~[na:1.8.0_131]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_131]
	... 26 common frames omitted
```
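ContainerProperties lived in org.springframework.kafka.listener.config through spring-kafka 2.1 and moved to org.springframework.kafka.listener in 2.2. Spring Boot 2.0.x (Spring 5.0.13, as in the trace) compiles its Kafka auto-configuration against the old location, so this ClassNotFoundException suggests a newer spring-kafka jar was pinned explicitly and no longer matches what Boot expects. The low-risk fix is to stop pinning the version and let Boot's dependency management choose; a hedged pom.xml sketch:

```
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!-- no <version> element: spring-boot-dependencies picks the matching one -->
</dependency>
```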

Kafka + Storm integration throws an exception

```
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: No leader found for partition 1
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
	at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
	at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135)
	at backtype.storm.daemon.executor$fn__6579$fn__6594$fn__6623.invoke(executor.clj:565)
	at backtype.storm.util$async_loop$fn__459.invoke(util.clj:463)
	at clojure.lang.AFn.run(AFn.java:24)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: No leader found for partition 1
	at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81)
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:79)
	... 6 more
Caused by: java.lang.RuntimeException: No leader found for partition 1
	at storm.kafka.DynamicBrokersReader.getLeaderFor(DynamicBrokersReader.java:120)
	at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:68)
	... 7 more
```
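"No leader found for partition 1" is read straight out of Kafka's metadata in ZooKeeper: the spout found the partition node, but its leader field was empty, which usually means the broker hosting that partition is down or leadership is mid-election. A hedged first diagnostic from any broker's bin directory (the host and topic are placeholders to fill in):

```
# Does every partition show a live leader right now?
./kafka-topics.sh --describe --zookeeper <zk-host>:2181 --topic <your-topic>
```

If a partition shows Leader: -1, fix the brokers first; the Storm side is only the messenger.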

Spring Boot + Kafka: producer closed within 0 milliseconds

While wiring Spring Boot to Kafka, running the message producer prints the producer configuration to the console, then reports that the producer was closed within 0 ms, and sending fails. Is this a problem in the configuration file, or in the sending code?

Sender code:

```
@Component
@EnableKafka
public class MessageSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // private static final MessageSender sender = new MessageSender();

    /**
     * Send a message through the Kafka client.
     * @param topic   topic name
     * @param message message body
     * @return whether the send succeeded
     */
    public boolean sendMessage(String topic, String message) {
        try {
            System.out.println("topic" + topic + "message" + message);
            kafkaTemplate.send(topic, message);
        } catch (Exception e) {
            return false;
        }
        return true;
    }
}
```

Console output:

```
2019-07-09 22:43:34.008  INFO 7916 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
	acks = 1
	batch.size = 65536
	bootstrap.servers = [192.168.2.2:9092]
	buffer.memory = 524288
	client.id =
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringDeserializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringDeserializer

2019-07-09 22:43:34.019  INFO 7916 --- [nio-8080-exec-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
2019-07-09 22:43:34.040 DEBUG 7916 --- [nio-8080-exec-1] o.s.b.w.s.f.OrderedRequestContextFilter : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@2e251d9f
2019-07-09 22:43:34.077 DEBUG 7916 --- [nio-8080-exec-2] o.s.b.w.s.f.OrderedRequestContextFilter : Bound request context to thread: org.apache.catalina.connector.RequestFacade@2e251d9f
2019-07-09 22:43:34.112 DEBUG 7916 --- [nio-8080-exec-2] o.s.b.w.s.f.OrderedRequestContextFilter : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@2e251d9f
```
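The config dump answers the question: key.serializer and value.serializer are set to `org.apache.kafka.common.serialization.StringDeserializer`, which implements Deserializer, not Serializer, so the KafkaProducer constructor throws and immediately disposes the half-built instance ("Closing the Kafka producer with timeoutMillis = 0 ms" is exactly what that cleanup path logs). Point both keys at the serializer classes; a sketch for application.properties:

```
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
```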

log4j2 log rolling question

The project configures logging with log4j2.xml. The current rolling policy keeps at most 10 log files of 100 MB each: when info.log fills up it rolls to info-1.log, then info-2.log, and so on up to info-10.log. The configuration:

```
<RollingFile name="RollingFile" fileName="log/info.log" filePattern="log/info-%i.log" append="true">
    <PatternLayout pattern="%d{DEFAULT} %c %m%n" />
    <Policies>
        <SizeBasedTriggeringPolicy size="100 MB"/>
    </Policies>
    <DefaultRolloverStrategy max="10"/>
</RollingFile>
```

What I actually want: start at info-1.log, write through info-10.log, and when info-10.log is full, wrap back around to info-1.log. How should this be configured? The official docs don't give a good answer either; hoping someone here can help!
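Log4j2 has no strategy that literally wraps around and overwrites info-1.log in place, but DefaultRolloverStrategy with fileIndex="min" comes close to the described effect: the newest rolled file is always info-1.log, older ones shift toward info-10.log, and files beyond max are deleted, so the set stays bounded at ten 100 MB files. A hedged sketch of the one changed line:

```
<DefaultRolloverStrategy max="10" fileIndex="min"/>
```

If literal in-place wraparound is a hard requirement, that behavior has to be scripted outside log4j2.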

Single-node Kafka won't start, even though ZooKeeper is up

```
[2018-11-06 11:19:42,152] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
	at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:230)
	at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
	at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
	at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:226)
	at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:95)
	at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1580)
	at kafka.server.KafkaServer.kafka$server$KafkaServer$$createZkClient$1(KafkaServer.scala:348)
	at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:372)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:75)
	at kafka.Kafka.main(Kafka.scala)
[2018-11-06 11:19:42,157] INFO shutting down (kafka.server.KafkaServer)
[2018-11-06 11:19:42,161] WARN  (kafka.utils.CoreUtils$)
java.lang.NullPointerException
	at kafka.server.KafkaServer$$anonfun$shutdown$5.apply$mcV$sp(KafkaServer.scala:579)
	at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
	at kafka.server.KafkaServer.shutdown(KafkaServer.scala:579)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:329)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:75)
	at kafka.Kafka.main(Kafka.scala)
```
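"Timed out waiting for connection while in state: CONNECTING" means the broker never completed a ZooKeeper handshake within the connection timeout: a wrong zookeeper.connect host/port, a firewall in the way, or a ZK process that is up but not actually serving. Hedged server.properties knobs to verify (the address and timeout are example values):

```
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
```

It is also worth confirming ZK itself answers, for example with `echo srvr | nc localhost 2181`.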

Kafka ArrayIndexOutOfBoundsException: 18

I ran into the problem below on a Kafka broker. Everything I found online says it is caused by a newer Kafka client sending an ApiVersions (key 18) request to an old broker, which versions before 0.10 don't support. But the Kafka client installed on all of my producers, consumers, and servers is 0.9.0.1, so this shouldn't happen. Why does it? Any pointers appreciated.

```
[2018-10-25 10:03:17,919] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2018-10-25 10:03:18,080] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic-test,0] (kafka.server.ReplicaFetcherManager)
[2018-10-25 10:03:18,099] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic-test,0] (kafka.server.ReplicaFetcherManager)
[2018-10-25 10:03:48,864] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18
	at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
	at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
	at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
	at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
	at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at kafka.network.Processor.run(SocketServer.scala:421)
	at java.lang.Thread.run(Thread.java:748)
```
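ApiKeys.forId blowing up on id 18 (the ApiVersions request, introduced in 0.10) really does mean some >= 0.10 client spoke to this 0.9 broker; the catch is that the client need not be one of your own producers or consumers. Anything that opens a connection to port 9092 counts: a monitoring agent, a management UI, or a console tool run from a newer Kafka distribution on someone's machine. A hedged way to catch the culprit at the moment the error fires:

```
# who is connected to the broker port right now?
netstat -tnp | grep 9092
```

Either track down and retire the newer client, or upgrade the broker to >= 0.10 so the request is understood.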

Custom Kafka producer fails connecting to the topic

```
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
[2017-10-25 22:02:15,757] ERROR Failed to send requests for topics xutongtp with correlation ids in [0,12] (kafka.producer.async.DefaultEventHandler:99)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:98)
	at kafka.producer.Producer.send(Producer.scala:78)
	at kafka.javaapi.producer.Producer.send(Producer.scala:35)
	at kafka.transwarp.io.KafkaProducer.produce(KafkaProducer.java:34)
	at kafka.transwarp.io.KafkaMain.main(KafkaMain.java:7)
```
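With the 0.8-era sync producer, "Failed to send messages after 3 tries" nearly always reduces to a metadata problem: a wrong metadata.broker.list, a broker that registered a host name the client cannot resolve, or a topic (xutongtp here) that does not exist while auto-creation is disabled. Hedged broker-side settings to check (the host name is a placeholder):

```
# server.properties
advertised.host.name=<a name the client machine can resolve>
auto.create.topics.enable=true
```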

kafka-node: why does the producer's ready event stop firing?

I created a producer; when the ready event fires I call send() to publish a message. After a few messages, ready is never triggered again and no further messages go out. Where is the problem? The code:

```
producer.on('ready', function(err, result) {
    console.log('kafka_server.send:ready ' + message);
    console.log('err = ' + err);
    console.log('result = ' + result);
    console.log(err || result);
    producer.send(payloads, function(err, data) {
        console.log('producer.send:' + err)
        if (err === null)
            server_manager.tcc_server.netlog.info('Kafka send msg [%s] ok!', data.toString());
        else
            server_manager.tcc_server.netlog.info('Kafka send msg error : - %s', err);
    });
});
```
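In kafka-node, 'ready' is a connection-level event: it fires when the client (re)connects, not before every send, so code that only calls send() inside the 'ready' handler sends one batch per connection and then waits forever. A hedged sketch of the usual pattern (payloads and the netlog calls are taken from the question's own code):

```
// remember readiness once, then send freely afterwards
var isReady = false;
producer.on('ready', function () {
    isReady = true;
});

function sendToKafka(payloads) {
    if (!isReady) return; // or queue until the producer is ready
    producer.send(payloads, function (err, data) {
        if (err === null)
            server_manager.tcc_server.netlog.info('Kafka send msg [%s] ok!', data.toString());
        else
            server_manager.tcc_server.netlog.info('Kafka send msg error : - %s', err);
    });
}
```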

Storm + Kafka integration error, please help diagnose

```
70308 [Thread-29-spout-read-kafka] INFO  s.k.ZkCoordinator - Task [1/1] New partition managers: [Partition{host=pamshost02:9092, partition=9}, Partition{host=pamshost02:9092, partition=8}, Partition{host=pamshost02:9092, partition=7}, Partition{host=pamshost02:9092, partition=6}, Partition{host=pamshost02:9092, partition=5}, Partition{host=pamshost02:9092, partition=4}, Partition{host=pamshost02:9092, partition=3}, Partition{host=pamshost02:9092, partition=0}, Partition{host=pamshost02:9092, partition=1}, Partition{host=pamshost02:9092, partition=2}]
70599 [Thread-29-spout-read-kafka] INFO  s.k.PartitionManager - Read partition information from: /detect/readKafka/partition_9 --> null
91326 [Thread-29-spout-read-kafka] INFO  k.c.SimpleConsumer - Reconnect due to socket error: java.nio.channels.ClosedChannelException
91327 [Thread-29-spout-read-kafka] ERROR b.s.util - Async loop died!
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[storm-kafka-0.10.0.jar:0.10.0]
	at backtype.storm.daemon.executor$fn__5624$fn__5639$fn__5670.invoke(executor.clj:607) ~[storm-core-0.10.0.jar:0.10.0]
	at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
Caused by: java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[kafka_2.11-0.9.0.0.jar:?]
	at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[storm-kafka-0.10.0.jar:0.10.0]
	... 6 more
91329 [Thread-29-spout-read-kafka] ERROR b.s.d.executor -
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[storm-kafka-0.10.0.jar:0.10.0]
	at backtype.storm.daemon.executor$fn__5624$fn__5639$fn__5670.invoke(executor.clj:607) ~[storm-core-0.10.0.jar:0.10.0]
	at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
Caused by: java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[kafka_2.11-0.9.0.0.jar:?]
	at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) ~[kafka_2.11-0.9.0.0.jar:?]
	at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:74) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:64) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.PartitionManager.<init>(PartitionManager.java:89) ~[storm-kafka-0.10.0.jar:0.10.0]
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[storm-kafka-0.10.0.jar:0.10.0]
	... 6 more
91535 [Thread-29-spout-read-kafka] ERROR b.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
	at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:336) [storm-core-0.10.0.jar:0.10.0]
	at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.6.0.jar:?]
	at backtype.storm.daemon.worker$fn__7184$fn__7185.invoke(worker.clj:532) [storm-core-0.10.0.jar:0.10.0]
	at backtype.storm.daemon.executor$mk_executor_data$fn__5523$fn__5524.invoke(executor.clj:261) [storm-core-0.10.0.jar:0.10.0]
	at backtype.storm.util$async_loop$fn__545.invoke(util.clj:489) [storm-core-0.10.0.jar:0.10.0]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]
```
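The "Reconnect due to socket error: ClosedChannelException" happens while the spout asks the broker for offsets, which typically means the worker resolved the broker as pamshost02:9092 (the name the broker registered in ZooKeeper) but cannot actually reach it under that name, or the connection is being dropped. Also note the version mix visible in the trace, storm-kafka 0.10.0 on top of kafka_2.11-0.9.0.0 client jars, which is worth aligning. Hedged first checks from the Storm worker machine:

```
# can this worker resolve and reach the broker under its registered name?
ping pamshost02
nc -vz pamshost02 9092
```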

