Kafka producer cannot update metadata when sending messages

Extremely urgent, please help.
The Kafka broker starts normally and ZooKeeper is fine, but sending messages with the console producer fails with a metadata error:
```
[root@newkafka2 ~]# /root/kafka2/bin/kafka-console-producer.sh -broker-list server2:9092 -topic newkafka2
111111111111
[2017-10-20 15:42:14,048] ERROR Error when sending message to topic newkafka2 with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2017-10-20 15:43:21,289] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-10-20 15:45:28,516] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-10-20 15:47:35,876] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
```

4 answers

The offset/metadata request timed out; in your case this is basically a connection problem.

http://blog.csdn.net/tianbianlan/article/details/46387039
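
If it really is a connection problem, the usual first suspect with the console producer is the broker's listener configuration: "Connection to node -1" means the client never managed to talk to the bootstrap broker at server2:9092. A minimal sketch of the relevant server.properties entries, assuming the broker runs on the host server2 as in the question:

```
# server.properties (sketch; host name taken from the question)
# Bind on all interfaces so remote clients can connect
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise an address the producer machine can actually resolve and reach
advertised.listeners=PLAINTEXT://server2:9092
```

Also confirm that server2 resolves correctly from the producer host and that nothing (firewall, wrong port) blocks 9092.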

Other related questions
How does the producer SDK push messages to a Kafka cluster?

How does the producer SDK in Kafka push messages to the Kafka cluster? Could someone explain this? Thanks.
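
For illustration, here is a minimal sketch with the official Java client; the broker address, topic, key, and value are placeholders:

```
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; the callback fires when the broker acknowledges (or fails)
            producer.send(new ProducerRecord<>("my-topic", "some-key", "some-value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        }
                    });
        } // try-with-resources closes the producer, which flushes buffered records
    }
}
```

The client batches records in memory and a background I/O thread pushes the batches to the partition leaders; that is the whole "push" mechanism from the SDK's point of view.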

Kafka producer (Java client) in a Docker container cannot send messages!!

During our project rollout, an application deployed in a Docker container could not send messages with the Kafka client, while the same application deployed on the physical machine (the container's host) could send messages fine. We then added and tuned two parameters on the producer inside the container, linger.ms and batch.size, and after that the client in the container could send. What we do not understand now: is there any difference in the defaults of linger.ms and batch.size between running in a container and running on a physical machine?
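
For what it is worth, the defaults for those two parameters are the same in a container and on a physical machine (in the Java client, linger.ms defaults to 0 and batch.size to 16384 bytes), so the defaults themselves cannot differ between the two environments. A sketch of setting them explicitly, using string keys in the style of the other questions here:

```
// inside the producer setup (sketch; values are the ones the question mentions, not recommendations)
props.put("linger.ms", 50);      // wait up to 50 ms so more records can join a batch (default 0)
props.put("batch.size", 16384);  // upper bound in bytes for one per-partition batch (default 16384)
```

If tuning these made sends succeed only inside the container, the change most likely just altered request timing; the underlying difference is more plausibly container networking than the parameter defaults.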

Kafka 1.0.0 client: producer fails to produce data

Configuration:

```
Properties props = new Properties();
// broker addresses
props.put("bootstrap.servers", "39.108.61.252:9092,39.108.61.252:9093,39.108.61.252:9094");
// acknowledgment required for a request
props.put("acks", "0");
// retry when a request fails
props.put("retries", 1);
// the producer combines records into batched requests, which helps both client and server performance;
// do not make it larger than the default, or memory is wasted and throughput actually drops
//props.put("batch.size", 16384);
// gather records for a short while and send them together
// props.put("linger.ms", 50);
// size of the in-memory buffer
props.put("buffer.memory", 33554432);
// key serializer
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// value serializer
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producer = new KafkaProducer<>(props);
```

Producing data: without Thread.sleep() the data is never sent successfully; with it, the consumer receives the data. If I remove Thread.sleep() but call close() at the end of production, it also succeeds.

```
for (int i = 0; i < 10; i++) {
    try {
        Thread.sleep(50);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    kafkaProducer.send(new ProducerRecord<>("topic_user_general_info_update", "simpleKey", "value-" + i));
}
```

This has puzzled me for a long time; I cannot tell whether it is the configuration or something else.
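
That symptom matches how the Java producer works: send() is asynchronous and only appends the record to an in-memory buffer, and a background I/O thread ships the batches later. If main() exits before that thread gets a chance to run, the buffered records are silently discarded, which is why both Thread.sleep() and close() appear to "fix" it. A sketch of the usual pattern, reusing the question's variable names:

```
for (int i = 0; i < 10; i++) {
    kafkaProducer.send(new ProducerRecord<>("topic_user_general_info_update", "simpleKey", "value-" + i));
}
// send() only buffers; make sure the records actually leave the process before exiting:
kafkaProducer.flush();  // blocks until every buffered record has been transmitted
kafkaProducer.close();  // also flushes, then releases the producer's resources
```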

[Kafka] Kafka message producer throws exceptions

The server was running normally and Kafka had no problems, but from some moment on it kept throwing exceptions; after restarting the server everything returned to normal. What could be the cause? The vast majority of messages threw exceptions, but a small fraction threw none and yet never arrived either. What could explain those lost-but-exceptionless messages? Known: the Kafka message producer does not specify a partition. ![kafka producer exception](https://img-ask.csdn.net/upload/201804/28/1524849596_87543.png)

Kafka producer not distributing messages across partitions

<div class="post-text" itemprop="text"> <p>I am creating kafka topic as following:</p> <pre><code>kafka-topics --create --zookeeper xx.xxx.xx:2181 --replication-factor 2 --partitions 200 --topic test6 --config retention.ms=900000 </code></pre> <p>and then I produce messages with golang using the following library:</p> <pre><code> "gopkg.in/confluentinc/confluent-kafka-go.v1/kafka" </code></pre> <p>the producer configuration looks like this:</p> <pre><code> for _, message := range bigslice { topic := "test6" p.Produce(&amp;kafka.Message{ TopicPartition: kafka.TopicPartition{Topic: &amp;topic}, Value: []byte(message), }, nil) } </code></pre> <p>the problem that I've sent more than 200K messages but they all lands in partition 0.</p> <p>what could be wrong in this situation?</p> </div>

Spring Boot with Kafka: message producer closed after 0 milliseconds

While wiring Spring Boot to Kafka, running the message producer prints the producer's configuration to the console and then reports that the producer was closed within 0 ms, after which sending fails. Is this a problem with the configuration file or with the message-sending code?

Sender code:

```
@Component
@EnableKafka
public class MessageSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a message through the Kafka client.
     * @param topic   the topic
     * @param message the message body
     */
    public boolean sendMessage(String topic, String message) {
        try {
            System.out.println("topic" + topic + "message" + message);
            kafkaTemplate.send(topic, message);
        } catch (Exception e) {
            return false;
        }
        return true;
    }
}
```

Console output:

```
2019-07-09 22:43:34.008  INFO 7916 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
    acks = 1
    batch.size = 65536
    bootstrap.servers = [192.168.2.2:9092]
    buffer.memory = 524288
    client.id =
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringDeserializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringDeserializer

2019-07-09 22:43:34.019  INFO 7916 --- [nio-8080-exec-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
2019-07-09 22:43:34.040 DEBUG 7916 --- [nio-8080-exec-1] o.s.b.w.s.f.OrderedRequestContextFilter : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@2e251d9f
2019-07-09 22:43:34.077 DEBUG 7916 --- [nio-8080-exec-2] o.s.b.w.s.f.OrderedRequestContextFilter : Bound request context to thread: org.apache.catalina.connector.RequestFacade@2e251d9f
2019-07-09 22:43:34.112 DEBUG 7916 --- [nio-8080-exec-2] o.s.b.w.s.f.OrderedRequestContextFilter : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@2e251d9f
```
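
One detail stands out in the pasted configuration: key.serializer and value.serializer are both set to StringDeserializer, which is a consumer-side class. The producer requires Serializer implementations; construction fails when it is given a deserializer, and the client then logs exactly this "Closing the Kafka producer with timeoutMillis = 0 ms." line. A sketch of the fix, assuming the values come from application.properties via spring-kafka:

```
# application.properties (sketch)
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
```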

Kafka consumer cannot consume messages

After deploying the Kafka cluster and consumer servers in production, we send real-time logs to the cluster through Logstash, and the consumers initially consume them normally. But after about two minutes the consumers stop consuming. How should I narrow down where the problem lies? 1. The Kafka server logs show that Logstash is still pushing real-time logs to Kafka; the cluster's log retention is two hours. 2. There are two consumer machines, and both are running at the same time. 3. The cluster has three brokers; while investigating, we noticed the consumers were connected to only one broker. Could that be the cause?
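
When consumption stalls like this, a useful first step is to inspect the group's lag and partition assignment from any broker host; the group name below is a placeholder:

```
bin/kafka-consumer-groups.sh --bootstrap-server <broker>:9092 \
  --describe --group <your-consumer-group>
```

If LAG keeps growing while CURRENT-OFFSET stands still, the consumers are stuck rather than starved. As for point 3: a consumer only needs connections to the leaders of its assigned partitions plus the group coordinator, so a connection to a single broker is not by itself a fault.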

Kafka installed with Docker: producer test never succeeds

1. After installing Kafka with Docker, I can never get the producer to produce messages successfully. Creating a topic reports no error:

```
bash-4.4# kafka-topics.sh --create --zookeeper 118.190.26.133:2181 --replication-factor 1 --partitions 1 --topic mytest_topic
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic mytest_topic.
```

But running the console producer gives the following errors:

```
bash-4.4# kafka-console-producer.sh --broker-list 118.190.26.133:9092 --topic mytest_topic
>[2020-04-20 02:50:33,086] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,087] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,175] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,175] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,277] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,277] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,480] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,480] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,934] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:33,934] WARN [Producer clientId=console-producer] Bootstrap broker 118.190.26.133:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-04-20 02:50:34,741] WARN [Producer clientId=console-producer] Connection to node -1 (/118.190.26.133:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
```

Below is the relevant part of the server.properties configuration:

```
bash-4.4# vi server.properties

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
advertised.host.name=master

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://0.0.0.0:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://118.190.26.133:9092
```
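
With advertised.listeners already pointing at the public IP, what remains to rule out is plain reachability: the container's port mapping and any host or cloud firewall in front of 9092 (on public cloud hosts a security-group rule commonly blocks it). A quick check from the client machine, assuming nc is available:

```
nc -vz 118.190.26.133 9092
```

If the connection is refused or times out, fix the Docker port mapping (for example -p 9092:9092) or the firewall rule before debugging Kafka itself.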

Kafka consumer: fetching messages

With ActiveMQ the broker pushes to the consumer: the consumer just registers a MessageListener and receives messages. But a Kafka consumer has to pull messages from the broker. How can the consumer know that the broker already has messages for a given topic? By fetching on a timer?
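
There is no need to ask the broker first whether messages exist: the consumer subscribes and calls poll() in a loop, and poll() blocks up to a timeout and returns whatever is available, possibly nothing. A minimal sketch with the Java client (broker, group, and topic names are placeholders; clients older than 2.0 use poll(long) instead of poll(Duration)):

```
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                // poll() blocks up to the timeout and returns whatever has arrived;
                // an empty batch just means nothing new yet, so no separate check is needed
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```

So timed fetching is effectively what the client already does for you; the poll loop itself is the subscription.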

Garbled Chinese characters in Kafka messages

![screenshot](https://img-ask.csdn.net/upload/201912/13/1576215935_606006.png) Running the Kafka console producer and consumer from cmd, the delivered messages come out garbled. After switching the code page to UTF-8 with chcp, the output becomes empty instead. Console output from the code itself is fine. How can this be fixed???

Kafka consumer sometimes fetches no messages at startup

The Kafka consumer sometimes cannot fetch any messages when it starts, but works after a restart; sometimes it takes many restarts. I do not know why and would appreciate some guidance.

```
[ INFO ] [2016-09-29 14:34:53] org.hibernate.validator.internal.util.Version [30] - HV000001: Hibernate Validator 5.2.4.Final
[ INFO ] [2016-09-29 14:34:53] com.coocaa.salad.stat.ApplicationMain [48] - Starting ApplicationMain on zhuxiang with PID 1740 (D:\IdeaProjects\green-salad\adx-stat\target\classes started by zhuxiang in D:\IdeaProjects\green-salad)
[ INFO ] [2016-09-29 14:34:53] com.coocaa.salad.stat.ApplicationMain [663] - The following profiles are active: dev
[ INFO ] [2016-09-29 14:34:54] org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext [581] - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5754de72: startup date [Thu Sep 29 14:34:54 CST 2016]; root of context hierarchy
2016-09-29 14:34:55 JRebel: Monitoring Spring bean definitions in 'D:\IdeaProjects\green-salad\adx-stat\target\classes\spring-integration-consumer.xml'.
[ INFO ] [2016-09-29 14:34:55] org.springframework.beans.factory.xml.XmlBeanDefinitionReader [317] - Loading XML bean definitions from URL [file:/D:/IdeaProjects/green-salad/adx-stat/target/classes/spring-integration-consumer.xml]
[ INFO ] [2016-09-29 14:34:56] org.springframework.beans.factory.config.PropertiesFactoryBean [172] - Loading properties file from URL [jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
2016-09-29 14:34:56 JRebel: Monitoring properties in 'jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties'.
[ INFO ] [2016-09-29 14:34:56] org.springframework.integration.config.IntegrationRegistrar [330] - No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
[ INFO ] [2016-09-29 14:34:56] org.springframework.beans.factory.support.DefaultListableBeanFactory [843] - Overriding bean definition for bean 'kafkaConsumerService' with a different definition: replacing [Generic bean: class [com.coocaa.salad.stat.service.KafkaConsumerService]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in file [D:\IdeaProjects\green-salad\adx-stat\target\classes\com\coocaa\salad\stat\service\KafkaConsumerService.class]] with [Generic bean: class [com.coocaa.salad.stat.service.KafkaConsumerService]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in URL [file:/D:/IdeaProjects/green-salad/adx-stat/target/classes/spring-integration-consumer.xml]]
[ INFO ] [2016-09-29 14:34:57] org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor [130] - No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
[ INFO ] [2016-09-29 14:34:57] org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor [158] - No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
[ INFO ] [2016-09-29 14:34:57] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [class org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$3dea2e76] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:58] org.springframework.beans.factory.config.PropertiesFactoryBean [172] - Loading properties file from URL [jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
2016-09-29 14:34:58 JRebel: Monitoring properties in 'jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties'.
[ INFO ] [2016-09-29 14:34:58] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'integrationGlobalProperties' of type [class org.springframework.beans.factory.config.PropertiesFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:58] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'integrationGlobalProperties' of type [class java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:59] org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer [88] - Tomcat initialized with port(s): 8081 (http)
```

The content of spring-integration-consumer.xml is as follows:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka-1.0.xsd
           http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">

    <!-- topic test conf -->
    <int:channel id="inputFromKafka">
        <int:dispatcher task-executor="kafkaMessageExecutor"/>
    </int:channel>

    <!-- ZooKeeper connection; multiple hosts may be configured -->
    <int-kafka:zookeeper-connect id="zookeeperConnect"
                                 zk-connect="172.20.135.95:2181,172.20.135.95:2182"
                                 zk-connection-timeout="10000"
                                 zk-session-timeout="10000"
                                 zk-sync-time="2000"/>

    <!-- channel adapter; auto-startup="true", otherwise no data is received -->
    <int-kafka:inbound-channel-adapter id="kafkaInboundChannelAdapter"
                                       kafka-consumer-context-ref="consumerContext"
                                       auto-startup="true"
                                       channel="inputFromKafka">
        <int:poller fixed-delay="1" time-unit="MILLISECONDS"/>
    </int-kafka:inbound-channel-adapter>

    <task:executor id="kafkaMessageExecutor" pool-size="8" keep-alive="120" queue-capacity="500"/>

    <bean id="kafkaDecoder"
          class="org.springframework.integration.kafka.serializer.common.StringDecoder"/>

    <bean id="consumerProperties"
          class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
                <prop key="auto.commit.enables">true</prop>
            </props>
        </property>
    </bean>

    <!-- bean that receives the messages -->
    <bean id="kafkaConsumerService" class="com.coocaa.salad.stat.service.KafkaConsumerService"/>

    <!-- method that handles the messages -->
    <int:outbound-channel-adapter channel="inputFromKafka"
                                  ref="kafkaConsumerService"
                                  method="processMessage"/>

    <int-kafka:consumer-context id="consumerContext"
                                consumer-timeout="1000"
                                zookeeper-connect="zookeeperConnect"
                                consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration group-id="group-4"
                                              value-decoder="kafkaDecoder"
                                              key-decoder="kafkaDecoder"
                                              max-messages="5000">
                <!-- two topic configurations -->
                <int-kafka:topic id="clientsRequests2" streams="4"/>
                <!--<int-kafka:topic id="sunneytopic" streams="4" />-->
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
</beans>
```

The Kafka version is 0.10.

Why can't my Kafka consumer receive messages?

I am using C# in Visual Studio and created two projects, one for the producer and one for the consumer. I want to send and receive messages through a WinForm. At first I got the form sending and cmd receiving working, so the producer-to-topic path is fine: whatever I type into the form is printed in cmd (parsed with json).

![screenshot](https://img-ask.csdn.net/upload/202005/19/1589849111_706909.png)

Before running the code, I started ZooKeeper and Kafka:

```
zookeeper-server-start.bat ../../config/zookeeper.properties
kafka-server-start.bat ../../config/server.properties
```

and then opened a console consumer:

```
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
```

Producer code:

```
string topic = "test";
consumerMsg = entry.Key + " is: " + entry.Value;
KafkaNet.Protocol.Message kafkaProducerMsg = new KafkaNet.Protocol.Message(consumerMsg);
var options = new KafkaOptions(uri);
var router = new BrokerRouter(options);
var client = new Producer(router);
client.SendMessageAsync(topic, new List<KafkaNet.Protocol.Message> { kafkaProducerMsg }).Wait();
```

Then I tried to turn the consumer into a WinForm as well:

```
class Program
{
    static void Main(string[] args)
    {
        Uri uri = new Uri("http://localhost:9092");
        string topicName = "test";
        var options = new KafkaOptions(uri);
        var brokerRouter = new BrokerRouter(options);
        var consumer = new Consumer(new ConsumerOptions(topicName, brokerRouter));
        Console.WriteLine("on foreach..."); // I use Console.WriteLine to see how far I get
        foreach (var msg in consumer.Consume())
        {
            Console.WriteLine("in foreach...");
            Console.WriteLine(Encoding.UTF8.GetString(msg.Value));
        }
        Console.ReadLine();
    }
}
```

But nothing is received; consumer.Consume() seems to be empty. The output window shows a pile of lines like:

```
Awaiting message from: http://myacount.me.cn:9092/
Received message of size: 36 From: http://myacount.me.cn:9092/
Awaiting message from: http://myacount.me.cn:9092/
Received message of size: 36 From: http://myacount.me.cn:9092/
```

So the consumer does receive messages, right? Then why does the form display nothing? Maybe I used the wrong event? I tried TextChanged and a button click (clicking OK starts receiving), and neither reacted. Where did I go wrong, or what am I missing? Thanks for any help, I am really stuck.

Invalid timestamp when writing a Kafka producer with sarama

<div class="post-text" itemprop="text"> <p>I have a Kafka instance running (locally, in a Docker) and I created a producer in Go, using the <a href="https://godoc.org/github.com/Shopify/sarama" rel="nofollow noreferrer">sarama package</a>.</p> <p>As I want to use Kafka Streams on my topic, the producer has to embed a timestamp in the messages, or I get this ugly error message: </p> <blockquote> <p>org.apache.kafka.streams.errors.StreamsException: Input record ConsumerRecord(topic = crawler_events, partition = 0, offset = 0, CreateTime = -1, serialized key size = -1, serialized value size = 187, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {XXX}) has invalid (negative) timestamp. Possibly because a pre-0.10 producer client was used to write this record to Kafka without embedding a timestamp, or because the input topic was created before upgrading the Kafka cluster to 0.10+. Use a different TimestampExtractor to process this data.</p> </blockquote> <p>Here is the portion of code sending the message in my Go program: </p> <pre class="lang-go prettyprint-override"><code>// Init a connection to the Kafka host, // create the producer, // and count successes and errors in delivery func (c *kafkaClient) init() { config := sarama.NewConfig() config.Producer.Return.Successes = true c.config = *config var err error c.producer, err = sarama.NewAsyncProducer(c.hosts, &amp;c.config) if err != nil { panic(err) } go func() { for range c.producer.Successes() { c.successes++ } }() go func() { for range c.producer.Errors() { c.errors++ } }() } // Send a message to the Kafka topic, WITH TIMESTAMP func (c *kafkaClient) send(event string) { message := &amp;sarama.ProducerMessage{ Topic: c.topic, Value: sarama.StringEncoder(event), Timestamp: time.Now(), } c.producer.Input() &lt;- message c.enqueued++ } </code></pre> <p>As you can see, the timestamp I try to send is <code>time.Now()</code>. </p> <p>When I run the console consumer to see the received timestamps:</p> <pre><code>docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic crawler_events \ --from-beginning --property print.timestamp=true </code></pre> <p>I see they are all "-1": </p> <pre><code>CreateTime:-1 {"XXX"} </code></pre> <p>When adding a message to the topic with the console producer, I have the expected timestamps like: </p> <pre><code>CreateTime:1539010180284 hello </code></pre> <p>What am I doing wrong? Thanks for your help. </p> </div>

Kafka consumer group loses uncommitted messages

<div class="post-text" itemprop="text"> <p>I am using consumer group with just one consumer, just one broker ( docker wurstmeister image ). It's decided in a code to commit offset or not - if code returns error then message is not commited. I need to ensure that system does not lose any message - even if that means retrying same msg forever ( for now ;) ). For testing this I have created simple handler which does not commit offset in case of 'error' string send as message to kafka. All other strings are commited. </p> <pre><code>kafka-console-producer --broker-list localhost:9092 --topic test &gt;this will be commited </code></pre> <p>Now running </p> <pre><code>kafka-run-class kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9092 --group michalgrupa --describe </code></pre> <p>returns</p> <pre><code>TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID test 0 13 13 0 </code></pre> <p>so thats ok, there is no lag. Now we pass 'error' string to fake that something bad happened and message is not commited:</p> <pre><code>TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID test 0 13 14 1 </code></pre> <p>Current offset stays at right position + there is 1 lagged message. Now if we pass correct message again offset will move on to 15:</p> <p><code>TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG test 0 15 15</code> </p> <p>and message number 14 will not be picked up ever again. Is it default behaviour? Do I need to trace last offset and load message by it+1 manually? I have set commit interval to 0 to hopefully not use any auto.commit mechanism.</p> <p>fetch/commit code:</p> <pre><code>go func() { for { ctx := context.Background() m, err := mr.brokerReader.FetchMessage(ctx) if err != nil { break } if err := msgFunc(m); err != nil { log.Errorf("# messaging # cannot commit a message: %v", err) continue } // commit message if no error if err := mr.brokerReader.CommitMessages(ctx, m); err != nil { // should we do something else to just logging not committed message? log.Errorf("cannot commit message [%s] %v/%v: %s = %s; with error: %v", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value), err) } } }() </code></pre> <p>reader configuration:</p> <pre><code>kafkaReader := kafka.NewReader(kafka.ReaderConfig{ Brokers: brokers, GroupID: groupID, Topic: topic, CommitInterval: 0, MinBytes: 10e3, MaxBytes: 10e6, }) </code></pre> <p>library used: <a href="https://github.com/segmentio/kafka-go" rel="nofollow noreferrer">https://github.com/segmentio/kafka-go</a></p> </div>

How to filter messages in Kafka

How do you filter messages in Kafka? For example, the incoming log messages have 20 fields, but we only want a few of them. How can this be solved?
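
Kafka brokers do not filter message contents on the server side; filtering is done in the consumer, or in a small stream processor that writes a slimmed-down copy to another topic. A Kafka Streams sketch; the topic names, field delimiter, and kept field indices are all made-up examples:

```
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class LogFieldFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "log-field-filter"); // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> logs = builder.stream("raw-logs"); // hypothetical input topic
        logs.mapValues(LogFieldFilter::keepWantedFields)
            .to("filtered-logs"); // hypothetical output topic
        new KafkaStreams(builder.build(), props).start();
    }

    // Keep only the fields you care about; the delimiter and indices are examples
    static String keepWantedFields(String line) {
        String[] f = line.split("\\|");
        return f.length >= 8 ? f[0] + "|" + f[3] + "|" + f[7] : line;
    }
}
```

A plain consumer that parses each record and discards unwanted fields achieves the same result without the extra topic; the Streams variant is worth it when several downstream consumers want the trimmed data.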

Kafka consumer cannot consume data

```
[root@hzctc-kafka-5d61 ~]# kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group sbs-haodian-message1 --topic Message --zookeeper 10.1.5.61:2181
[2018-04-18 16:43:43,467] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/sbs-haodian-message1/offsets/Message/8.
```

When I use this command to check a consumer group's consumption, it reports the error above; the other consumer groups are normal. Does anyone know what causes this? I added a cache lock around the consuming logic, so the interval between poll() calls varies, maybe 10 s, 20 s, or 30 s, but I set the session timeout to 90 s. Could that have an effect?
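
The NoNode error simply means this group has no offsets stored under ZooKeeper's /consumers path, which is expected when the group uses the new broker-based consumer that keeps offsets in the internal __consumer_offsets topic instead. As the warning itself suggests, query it with ConsumerGroupCommand; the broker port below is an assumption:

```
kafka-consumer-groups.sh --bootstrap-server 10.1.5.61:9092 \
  --describe --group sbs-haodian-message1
```

Separately, on client versions 0.10.1 and later the allowed gap between poll() calls is bounded by max.poll.interval.ms rather than the session timeout alone, so irregular 10 to 30 second gaps are fine as long as they stay under that limit.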

Kafka Consumer: pulling messages

On 0.10, I have been testing the consumer side against the cluster and set the "maximum number of records per pull" parameter. But the number actually pulled is not fixed: when set to 500 or 600, each pull returns exactly 500 or 600; when set to 10000, the count per pull varies between about 4000 and 10000 (each message is 1 KB). So my question is: what is the consumer's exact pull mechanism, and why does the number of records per pull vary? Note: the messages were already written to the partitions beforehand.
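
max.poll.records is only an upper bound: poll() hands back whatever records are already in the client's fetch buffer, so the per-call count depends on how much the background fetch requests brought back, which is governed by fetch.min.bytes, fetch.max.wait.ms, and max.partition.fetch.bytes rather than by the limit alone. With a limit of 500 or 600 the buffer almost always holds at least that much, so you see the bound itself; with 10000 you see the raw fetch sizes. A sketch of the relevant knobs on a 0.10 consumer (values are illustrative):

```
props.put("max.poll.records", 10000);              // upper bound per poll(), not a guarantee
props.put("max.partition.fetch.bytes", 10485760);  // let each fetch carry more data per partition
props.put("fetch.min.bytes", 1);                   // broker replies as soon as any data is available (default)
```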

Cannot create a Kafka producer client with the Sarama Golang package: "client/metadata got error from broker while fetching metadata: EOF"

<div class="post-text" itemprop="text"> <p>Versions: GoLang 1.10.2 Kafka 4.4.1 Docker 18.03.1</p> <p>I'm trying to use Shopify's Sarama package to test out my Kafka instance. I used Docker compose to stand up Kafka/Zookeeper and it is all successfully running. </p> <p>When I try to create a Producer client with Sarama, an error is thrown. </p> <p>When I run the following </p> <pre><code> package main import ( "fmt" "log" "os" "os/signal" "time" "strconv" "github.com/Shopify/sarama" </code></pre> <p>)</p> <pre><code>func main() { // Setup configuration config := sarama.NewConfig() config.Producer.Return.Successes = true config.Producer.Partitioner = sarama.NewRandomPartitioner config.Producer.RequiredAcks = sarama.WaitForAll brokers := []string{"localhost:29092"} producer, err := sarama.NewAsyncProducer(brokers, config) if err != nil { // Should not reach here panic(err) } defer func() { if err := producer.Close(); err != nil { // Should not reach here panic(err) } }() </code></pre> <p>I get this </p> <p>[sarama] 2018/06/12 17:22:05 Initializing new client</p> <p>[sarama] 2018/06/12 17:22:05 client/metadata fetching metadata for all topics from broker localhost:29092</p> <p>[sarama] 2018/06/12 17:22:05 Connected to broker at localhost:29092 (unregistered)</p> <p>[sarama] 2018/06/12 17:22:05 client/metadata got error from broker while fetching metadata: EOF</p> <p>[sarama] 2018/06/12 17:22:05 Closed connection to broker localhost:29092</p> <p>{sarama] 2018/06/12 17:22:05 client/metadata no available broker to send metadata request to</p> <p>[sarama] 2018/06/12 17:22:06 Closing Client panic: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)</p> <p>goroutine 1 [running]: main.main() /Users/benwornom/go/src/github.com/acstech/doppler-events/testprod/main.go:29 +0x3ec exit status 2</p> <p>Sarama did try several times in a row to create a producer client, but failed each time. </p> <p>My understanding of Sarama's "NewAsyncProducer" method is that it calls "NewClient", which is invoked regardless of whether you are creating a Producer or Consumer. NewClient attempts to gather metadata from the Kafka broker, which is failing in my situation. I know it is connecting to the Kafka broker, but once it connects it seems to break. Any advice would be helpful. My network connection is strong, I can't think of anything interfering with the server. As far as I know, I only have one broker and one partition for the existing topic. I don't think I have to manually assign a topic to a broker. If my client is connecting with the broker, why can't I establish a lasting connection for my producer?</p> <p>This is from the kafka log file right before it dies. 
</p> <p>__consumer_offsets-5 -&gt; Vector(1), connect-offsets-23 -&gt; Vector(1), __consumer_offsets-43 -&gt; Vector(1), __consumer_offsets-32 -&gt; Vector(1), __consumer_offsets-21 -&gt; Vector(1), __consumer_offsets-10 -&gt; Vector(1), connect-offsets-20 -&gt; Vector(1), __consumer_offsets-37 -&gt; Vector(1), connect-offsets-9 -&gt; Vector(1), connect-status-4 -&gt; Vector(1), __consumer_offsets-48 -&gt; Vector(1), __consumer_offsets-40 -&gt; Vector(1), __consumer_offsets-29 -&gt; Vector(1), __consumer_offsets-18 -&gt; Vector(1), connect-offsets-14 -&gt; Vector(1), __consumer_offsets-7 -&gt; Vector(1), __consumer_offsets-34 -&gt; Vector(1), __consumer_offsets-45 -&gt; Vector(1), __consumer_offsets-23 -&gt; Vector(1), connect-offsets-6 -&gt; Vector(1), connect-status-1 -&gt; Vector(1), connect-offsets-17 -&gt; Vector(1), connect-offsets-0 -&gt; Vector(1), connect-offsets-22 -&gt; Vector(1), __consumer_offsets-26 -&gt; Vector(1), connect-offsets-11 -&gt; Vector(1), __consumer_offsets-15 -&gt; Vector(1), __consumer_offsets-4 -&gt; Vector(1), __consumer_offsets-42 -&gt; Vector(1), __consumer_offsets-9 -&gt; Vector(1), __consumer_offsets-31 -&gt; Vector(1), __consumer_offsets-20 -&gt; Vector(1), connect-offsets-3 -&gt; Vector(1), __consumer_offsets-1 -&gt; Vector(1), __consumer_offsets-12 -&gt; Vector(1), connect-offsets-8 -&gt; Vector(1), connect-offsets-19 -&gt; Vector(1), connect-status-3 -&gt; Vector(1), __confluent.support.metrics-0 -&gt; Vector(1), __consumer_offsets-17 -&gt; Vector(1), __consumer_offsets-28 -&gt; Vector(1), __consumer_offsets-6 -&gt; Vector(1), __consumer_offsets-39 -&gt; Vector(1), __consumer_offsets-44 -&gt; Vector(1), connect-offsets-16 -&gt; Vector(1), connect-status-0 -&gt; Vector(1), connect-offsets-5 -&gt; Vector(1), connect-offsets-21 -&gt; Vector(1), __consumer_offsets-47 -&gt; Vector(1), __consumer_offsets-36 -&gt; Vector(1), __consumer_offsets-14 -&gt; Vector(1), __consumer_offsets-25 -&gt; Vector(1), __consumer_offsets-3 -&gt; Vector(1), __consumer_offsets-30 -&gt; Vector(1), __consumer_offsets-41 -&gt; Vector(1), connect-offsets-13 -&gt; Vector(1), connect-offsets-24 -&gt; Vector(1), connect-offsets-2 -&gt; Vector(1), connect-configs-0 -&gt; Vector(1), __consumer_offsets-11 -&gt; Vector(1), __consumer_offsets-22 -&gt; Vector(1), __consumer_offsets-33 -&gt; Vector(1), __consumer_offsets-0 -&gt; Vector(1), connect-offsets-7 -&gt; Vector(1), connect-offsets-18 -&gt; Vector(1))) (kafka.controller.KafkaController) [36mkafka_1 |[0m [2018-06-12 20:24:47,461] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController) [36mkafka_1 |[0m [2018-06-12 20:24:47,462] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)</p> </div>

Golang consumer receives Kafka messages with a delay after connecting to Kafka

<div class="post-text" itemprop="text"> <p><em>I'm new to Golang and Kafa so this might seem like a silly question.</em></p> <p>After my Kafka consumer first connects to the Kafka server, why is there a delay (~ 20 secs) between establishing connection to the Kafka server, and receiving the first message?</p> <p>It prints a message right before <code>consumer.Messages()</code> and print another message for each message received. The ~20 sec delay is between the first <code>fmt.Println</code> and second <code>fmt.Println</code>.</p> <pre><code>package main import ( "fmt" "github.com/Shopify/sarama" cluster "github.com/bsm/sarama-cluster" ) func main() { // Create the consumer and listen for new messages consumer := createConsumer() // Create a signal channel to know when we are done done := make(chan bool) // Start processing messages go func() { fmt.Println("Start consuming Kafka messages") for msg := range consumer.Messages() { s := string(msg.Value[:]) fmt.Println("Msg: ", s) } }() &lt;-done } func createConsumer() *cluster.Consumer { // Define our configuration to the cluster config := cluster.NewConfig() config.Consumer.Return.Errors = false config.Group.Return.Notifications = false config.Consumer.Offsets.Initial = sarama.OffsetOldest // Create the consumer brokers := []string{"127.0.0.1:9092"} topics := []string{"orders"} consumer, err := cluster.NewConsumer(brokers, "my-consumer-group", topics, config) if err != nil { log.Fatal("Unable to connect consumer to Kafka") } go handleErrors(consumer) go handleNotifications(consumer) return consumer } </code></pre> <p><strong>docker-compose.yml</strong></p> <pre><code>version: '2' services: zookeeper: image: "confluentinc/cp-zookeeper:5.0.1" hostname: zookeeper ports: - "2181:2181" environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 broker-1: image: "confluentinc/cp-enterprise-kafka:5.0.1" hostname: broker-1 depends_on: - zookeeper ports: - "9092:9092" environment: KAFKA_BROKER_ID: 1 KAFKA_BROKER_RACK: rack-a KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181' KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1 KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://127.0.0.1:9092' KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter KAFKA_DELETE_TOPIC_ENABLE: "true" KAFKA_JMX_PORT: 9999 KAFKA_JMX_HOSTNAME: 'broker-1' KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092 CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181 CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1 CONFLUENT_METRICS_ENABLE: 'true' CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous' KAFKA_CREATE_TOPICS: "orders:1:1" </code></pre> </div>
