Kafka consumer sometimes fetches no messages on startup

When the Kafka consumer starts, it sometimes cannot fetch any messages, but after a restart it works; sometimes it takes several restarts. I don't know why and hope someone can point me in the right direction.

[ INFO ] [2016-09-29 14:34:53] org.hibernate.validator.internal.util.Version [30] - HV000001: Hibernate Validator 5.2.4.Final
[ INFO ] [2016-09-29 14:34:53] com.coocaa.salad.stat.ApplicationMain [48] - Starting ApplicationMain on zhuxiang with PID 1740 (D:\IdeaProjects\green-salad\adx-stat\target\classes started by zhuxiang in D:\IdeaProjects\green-salad)
[ INFO ] [2016-09-29 14:34:53] com.coocaa.salad.stat.ApplicationMain [663] - The following profiles are active: dev
[ INFO ] [2016-09-29 14:34:54] org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext [581] - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5754de72: startup date [Thu Sep 29 14:34:54 CST 2016]; root of context hierarchy
2016-09-29 14:34:55 JRebel: Monitoring Spring bean definitions in 'D:\IdeaProjects\green-salad\adx-stat\target\classes\spring-integration-consumer.xml'.
[ INFO ] [2016-09-29 14:34:55] org.springframework.beans.factory.xml.XmlBeanDefinitionReader [317] - Loading XML bean definitions from URL [file:/D:/IdeaProjects/green-salad/adx-stat/target/classes/spring-integration-consumer.xml]
[ INFO ] [2016-09-29 14:34:56] org.springframework.beans.factory.config.PropertiesFactoryBean [172] - Loading properties file from URL [jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
2016-09-29 14:34:56 JRebel: Monitoring properties in 'jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties'.
[ INFO ] [2016-09-29 14:34:56] org.springframework.integration.config.IntegrationRegistrar [330] - No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
[ INFO ] [2016-09-29 14:34:56] org.springframework.beans.factory.support.DefaultListableBeanFactory [843] - Overriding bean definition for bean 'kafkaConsumerService' with a different definition: replacing [Generic bean: class [com.coocaa.salad.stat.service.KafkaConsumerService]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in file [D:\IdeaProjects\green-salad\adx-stat\target\classes\com\coocaa\salad\stat\service\KafkaConsumerService.class]] with [Generic bean: class [com.coocaa.salad.stat.service.KafkaConsumerService]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in URL [file:/D:/IdeaProjects/green-salad/adx-stat/target/classes/spring-integration-consumer.xml]]
[ INFO ] [2016-09-29 14:34:57] org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor [130] - No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
[ INFO ] [2016-09-29 14:34:57] org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor [158] - No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
[ INFO ] [2016-09-29 14:34:57] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [class org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$3dea2e76] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:58] org.springframework.beans.factory.config.PropertiesFactoryBean [172] - Loading properties file from URL [jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
2016-09-29 14:34:58 JRebel: Monitoring properties in 'jar:file:/D:/maven-repo2/org/springframework/integration/spring-integration-core/4.3.1.RELEASE/spring-integration-core-4.3.1.RELEASE.jar!/META-INF/spring.integration.default.properties'.
[ INFO ] [2016-09-29 14:34:58] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'integrationGlobalProperties' of type [class org.springframework.beans.factory.config.PropertiesFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:58] org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [328] - Bean 'integrationGlobalProperties' of type [class java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[ INFO ] [2016-09-29 14:34:59] org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer [88] - Tomcat initialized with port(s): 8081 (http)

The content of spring-integration-consumer.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka-1.0.xsd
           http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">

    <!-- topic test conf -->
    <int:channel id="inputFromKafka">
        <int:dispatcher task-executor="kafkaMessageExecutor"/>
    </int:channel>

    <!-- ZooKeeper configuration; multiple addresses can be configured -->
    <int-kafka:zookeeper-connect id="zookeeperConnect"
                                 zk-connect="172.20.135.95:2181,172.20.135.95:2182" zk-connection-timeout="10000"
                                 zk-session-timeout="10000" zk-sync-time="2000"/>

    <!-- Channel adapter configuration; auto-startup="true", otherwise no data is received -->
    <int-kafka:inbound-channel-adapter
            id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
            auto-startup="true" channel="inputFromKafka">
        <int:poller fixed-delay="1" time-unit="MILLISECONDS"/>
    </int-kafka:inbound-channel-adapter>

    <task:executor id="kafkaMessageExecutor" pool-size="8" keep-alive="120" queue-capacity="500"/>

    <bean id="kafkaDecoder"
          class="org.springframework.integration.kafka.serializer.common.StringDecoder"/>

    <bean id="consumerProperties"
          class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
                <prop key="auto.commit.enables">true</prop>
            </props>
        </property>
    </bean>

    <!-- Bean that receives the messages -->
    <bean id="kafkaConsumerService" class="com.coocaa.salad.stat.service.KafkaConsumerService"/>

    <!-- Specify the receiving method -->
    <int:outbound-channel-adapter channel="inputFromKafka"
                                  ref="kafkaConsumerService" method="processMessage"/>

    <int-kafka:consumer-context id="consumerContext"
                                consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
                                consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration
                    group-id="group-4" value-decoder="kafkaDecoder" key-decoder="kafkaDecoder"
                    max-messages="5000">
                <!-- Two topic configurations -->
                <int-kafka:topic id="clientsRequests2" streams="4"/>
                <!--<int-kafka:topic id="sunneytopic" streams="4" />-->
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
</beans>

The Kafka version is 0.10.
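One configuration detail worth checking (an assumption based on the config above, not a confirmed diagnosis): with `streams="4"`, the high-level consumer spreads the topic's partitions over four threads, and any thread beyond the partition count receives nothing. The old consumer's range-style assignment can be sketched in plain Python:

```python
def range_assign(partitions, streams):
    """Split partitions as evenly as possible across consumer streams;
    the first streams get the extras, surplus streams get nothing."""
    per, extra = divmod(len(partitions), streams)
    assignment, start = [], 0
    for s in range(streams):
        n = per + (1 if s < extra else 0)
        assignment.append(partitions[start:start + n])
        start += n
    return assignment

# A topic with only 2 partitions consumed by 4 streams: two streams stay idle.
print(range_assign([0, 1], 4))  # -> [[0], [1], [], []]
```

If idle threads like this exist, which process holds the active partitions can change from restart to restart, which would be consistent with the intermittent behaviour described.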

1 answer

Other related questions
Kafka consumer cannot consume messages

在生产环境部署kafka集群和消费者服务器后,通过logstash向kafka集群发送实时日志,消费者也能正常消费信息。但是两分钟之后消费者就停止消费信息了,想问下各位老师如何排查问题点在哪里。 1:查看了kafka服务器的日志,logstash还在向kafka推实时日志,kafka集群日志留存时间是两个小时。 2:kafka消费者一共有两台,两台都在同时运行。 3:kafka集群有三台服务器,查问题的时候发现,kafka消费者只连接到了一台broker上,不知道这是不是原因所在。

Fetching messages as a Kafka consumer

With ActiveMQ the broker pushes to the consumer; the consumer just registers a MessageListener and receives messages. But with Kafka the consumer has to pull messages from the broker. How can it know that the broker already has a given topic? Poll on a timer?
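The pull model can be pictured as a loop that keeps calling poll with a timeout and simply receives an empty batch when nothing is there yet; the consumer does not need to know in advance whether the broker has data. A toy sketch in plain Python (`FakeBroker` is invented for illustration; no Kafka API is involved):

```python
from collections import deque

class FakeBroker:
    """Stand-in for one broker partition: a queue the consumer pulls from."""
    def __init__(self):
        self.log = deque()

    def append(self, msg):
        self.log.append(msg)

    def poll(self, max_records):
        """Return up to max_records messages; an empty list when none exist."""
        batch = []
        while self.log and len(batch) < max_records:
            batch.append(self.log.popleft())
        return batch

broker = FakeBroker()
print(broker.poll(5))   # nothing published yet -> []
broker.append("m1")
broker.append("m2")
print(broker.poll(5))   # -> ['m1', 'm2']
```

An empty poll result is the normal signal that there is nothing to read yet; the loop just polls again.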

Kafka consumer cannot consume data

Using the following command to check a consumer group's offsets reports the error below; the other consumer groups are fine. Does anyone know what causes this? In my consume logic I added a cache lock, so the interval after each poll call varies, maybe 10s, 20s or 30s, but my session timeout is set to 90s. Could that have any effect?

```
[root@hzctc-kafka-5d61 ~]# kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group sbs-haodian-message1 --topic Message --zookeeper 10.1.5.61:2181
[2018-04-18 16:43:43,467] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/sbs-haodian-message1/offsets/Message/8.
```

Why can't my Kafka consumer receive messages?

I'm using C# in Visual Studio. I created two projects, one for the producer and one for the consumer, and I want to send and receive messages through a WinForm. To start with I got form-sends / cmd-receives working, i.e. the producer-to-topic path is fine: whatever I type into the form is printed in cmd (parsed with JSON). ![screenshot](https://img-ask.csdn.net/upload/202005/19/1589849111_706909.png)

Before running the code I start ZooKeeper, Kafka, and then a console consumer:

```
zookeeper-server-start.bat ../../config/zookeeper.properties
kafka-server-start.bat ../../config/server.properties
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
```

Producer code:

```csharp
string topic = "test";
consumerMsg = entry.Key + " is: " + entry.Value;
KafkaNet.Protocol.Message kafkaProducerMsg = new KafkaNet.Protocol.Message(consumerMsg);
var options = new KafkaOptions(uri);
var router = new BrokerRouter(options);
var client = new Producer(router);
client.SendMessageAsync(topic, new List<KafkaNet.Protocol.Message> { kafkaProducerMsg }).Wait();
```

Then I tried to make the consumer a WinForm as well:

```csharp
class Program {
    static void Main(string[] args) {
        Uri uri = new Uri("http://localhost:9092");
        string topicName = "test";
        var options = new KafkaOptions(uri);
        var brokerRouter = new BrokerRouter(options);
        var consumer = new Consumer(new ConsumerOptions(topicName, brokerRouter));
        Console.WriteLine("on foreach..."); // I use Console.WriteLine to see how far I get
        foreach (var msg in consumer.Consume()) {
            Console.WriteLine("in foreach...");
            Console.WriteLine(Encoding.UTF8.GetString(msg.Value));
        }
        Console.ReadLine();
    }
}
```

But nothing is received... consumer.Consume() seems to be empty. The Output window shows this over and over:

```
Awaiting message from: http://myacount.me.cn:9092/
Received message of size: 36 From: http://myacount.me.cn:9092/
Awaiting message from: http://myacount.me.cn:9092/
Received message of size: 36 From: http://myacount.me.cn:9092/
```

So the consumer does receive messages, right? Then why does the form display nothing? Did I wire up the wrong event? I tried TextChanged and a button click (click OK to start receiving), and neither reacts. So where did I go wrong, or what am I missing? Thanks for any help!

How can a slow Kafka consumer process messages faster when adding partitions is not allowed?

As the title says: this was a recent interview question about Kafka's theoretical properties, and I may have misunderstood the exact requirements; if anyone sees the intent at a glance, please give me a hint. Searching around, the usual ways to improve consumption performance are: increase the partition count for more consumer parallelism [not allowed]; use multiple threads in the consumer, but if message processing is CPU-bound, adding threads won't help either. Or is my understanding wrong? To restate the problem: the producer generates 10,000 messages per second, while all consumers together can only process 5,000 per second, and the processing is pure CPU computation. Question: without adding partitions, how do you increase the message processing speed?
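One common shape for the workaround, sketched under the stated constraint (no new partitions): a single poller fans each polled batch out to a pool of workers and commits only after the whole batch completes. For genuinely CPU-bound work the pool would have to be processes or separate machines rather than threads, so treat this only as an illustration of the structure:

```python
from concurrent.futures import ThreadPoolExecutor

def process(msg):
    # placeholder for the real (CPU-heavy) message handling
    return msg * 2

def consume_batch(batch, workers=4):
    """Fan one polled batch out to a worker pool and wait for all
    results; only then would the caller commit the batch's offsets."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, batch))

print(consume_batch([1, 2, 3, 4]))  # -> [2, 4, 6, 8]
```

The trade-off: ordering within the partition is preserved only up to the batch boundary, and a crash mid-batch re-delivers the whole uncommitted batch.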

What determines a Kafka consumer's speed?

```java
@KafkaListener(topics = {"CRBKC0002.000"})
public void sendSmsInfoByBizType(String record) {
}
```

Assume a single-node Kafka deployment.
1. With the @KafkaListener annotation, does consumption only count as finished once this method completes? Can one instance only run this method once at a time, i.e. it cannot launch several threads executing it concurrently?
2. If consumption counts as finished as soon as the parameter is received, then suppose the producer writes 1,000,000 records per second into Kafka. Would that effectively spawn 1,000,000 concurrent invocations of this method? But Tomcat only has 200 threads.

Kafka: no data can be consumed

The Kafka cluster is set up correctly, and producing and consuming via the console both work, but a Java program cannot read any messages, even after switching groups:

```java
package test;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumer extends Thread {
    private String topic;

    public KafkaConsumer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        // Set the consumer's parameters via Properties and create a connector to Kafka
        ConsumerConnector consumer = createConsumer();
        // The map specifies the topic and how many streams to fetch
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 3);
        // Obtain the message streams from the connector
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);
        // Take the data of one partition of the topic
        KafkaStream<byte[], byte[]> kafkaStream = messageStreams.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> iterator = kafkaStream.iterator();
        while (iterator.hasNext()) {
            byte[] message = iterator.next().message();
            System.out.println("message is:" + new String(message));
        }
    }

    private ConsumerConnector createConsumer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "XXX:2181");
        properties.put("auto.offset.reset", "smallest"); // read old data
        properties.put("group.id", "333fcdcd");
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    public static void main(String[] args) {
        new KafkaConsumer("testtest").start();
    }
}
```

Golang consumer receives Kafka messages with a delay after connecting

*I'm new to Golang and Kafka so this might seem like a silly question.*

After my Kafka consumer first connects to the Kafka server, why is there a delay (~20 secs) between establishing the connection and receiving the first message?

It prints a message right before `consumer.Messages()` and another message for each message received. The ~20 sec delay is between the first `fmt.Println` and the second `fmt.Println`.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
	cluster "github.com/bsm/sarama-cluster"
)

func main() {
	// Create the consumer and listen for new messages
	consumer := createConsumer()

	// Create a signal channel to know when we are done
	done := make(chan bool)

	// Start processing messages
	go func() {
		fmt.Println("Start consuming Kafka messages")
		for msg := range consumer.Messages() {
			s := string(msg.Value[:])
			fmt.Println("Msg: ", s)
		}
	}()

	<-done
}

func createConsumer() *cluster.Consumer {
	// Define our configuration to the cluster
	config := cluster.NewConfig()
	config.Consumer.Return.Errors = false
	config.Group.Return.Notifications = false
	config.Consumer.Offsets.Initial = sarama.OffsetOldest

	// Create the consumer
	brokers := []string{"127.0.0.1:9092"}
	topics := []string{"orders"}
	consumer, err := cluster.NewConsumer(brokers, "my-consumer-group", topics, config)
	if err != nil {
		log.Fatal("Unable to connect consumer to Kafka")
	}

	go handleErrors(consumer)
	go handleNotifications(consumer)
	return consumer
}
```

**docker-compose.yml**

```yaml
version: '2'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:5.0.1"
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker-1:
    image: "confluentinc/cp-enterprise-kafka:5.0.1"
    hostname: broker-1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_BROKER_RACK: rack-a
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://127.0.0.1:9092'
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: 'broker-1'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
      KAFKA_CREATE_TOPICS: "orders:1:1"
```

Kafka consumer group loses uncommitted messages

I am using a consumer group with just one consumer and just one broker (docker wurstmeister image). The code decides whether to commit the offset or not: if the code returns an error, the message is not committed. I need to ensure that the system does not lose any message, even if that means retrying the same msg forever (for now ;) ). For testing this I created a simple handler which does not commit the offset when the string 'error' is sent as the message to Kafka. All other strings are committed.

```
kafka-console-producer --broker-list localhost:9092 --topic test
>this will be commited
```

Now running

```
kafka-run-class kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9092 --group michalgrupa --describe
```

returns

```
TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
test   0          13              13              0
```

so that's ok, there is no lag. Now we pass the 'error' string to fake that something bad happened, and the message is not committed:

```
TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
test   0          13              14              1
```

The current offset stays at the right position, plus there is 1 lagged message. Now if we pass a correct message again, the offset moves on to 15:

```
TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
test   0          15              15              0
```

and message number 14 will never be picked up again. Is this the default behaviour? Do I need to track the last offset and load the message at offset+1 manually? I have set the commit interval to 0, hopefully to avoid any auto-commit mechanism.

Fetch/commit code:

```go
go func() {
	for {
		ctx := context.Background()
		m, err := mr.brokerReader.FetchMessage(ctx)
		if err != nil {
			break
		}
		if err := msgFunc(m); err != nil {
			log.Errorf("# messaging # cannot commit a message: %v", err)
			continue
		}
		// commit message if no error
		if err := mr.brokerReader.CommitMessages(ctx, m); err != nil {
			// should we do something else than just logging the uncommitted message?
			log.Errorf("cannot commit message [%s] %v/%v: %s = %s; with error: %v",
				m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value), err)
		}
	}
}()
```

Reader configuration:

```go
kafkaReader := kafka.NewReader(kafka.ReaderConfig{
	Brokers:        brokers,
	GroupID:        groupID,
	Topic:          topic,
	CommitInterval: 0,
	MinBytes:       10e3,
	MaxBytes:       10e6,
})
```

Library used: https://github.com/segmentio/kafka-go
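The behaviour described is how Kafka offset commits work: a commit stores a single position per partition, so committing offset 15 implicitly marks 14 as processed. If a failed message must not be skipped, the application has to track it itself. A minimal sketch of one conservative policy, committing only up to the first unacknowledged offset (plain Python, no Kafka involved):

```python
class OffsetTracker:
    """Commit position = first offset not yet acknowledged, so an
    unacked message blocks commits past it and will be re-fetched."""
    def __init__(self, start=0):
        self.committed = start
        self.acked = set()

    def ack(self, offset):
        self.acked.add(offset)
        # advance the commit position over the contiguous acked prefix
        while self.committed in self.acked:
            self.acked.discard(self.committed)
            self.committed += 1

tracker = OffsetTracker(start=13)
tracker.ack(13)           # offset 13 processed fine
tracker.ack(15)           # 15 fine, but 14 failed and was never acked
print(tracker.committed)  # -> 14: commits stop before the failed message
```

Under this policy a restarted consumer resumes from the committed position, so the failed message is retried instead of lost, at the cost of re-processing the later messages that were already handled.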

Spring Boot 1.5 with Kafka: how does the consumer acknowledge consumption itself?

With Spring Boot 1.5 integrated with Kafka, how does the consumer acknowledge consumption itself? That is, how do you use the @KafkaListener annotation with Acknowledgment so the consumer commits the offset on its own?

Consuming Kafka data from Python: why do the first few polls return nothing?

When consuming Kafka data from Python, why is no data returned right after connecting? Code:

```python
# -*- coding:utf8 -*-
from kafka import KafkaConsumer
from kafka import TopicPartition
import kafka
import time

# Test how many records kafka's poll method can pull
consumer = KafkaConsumer(
    bootstrap_servers=['192.168.13.202:9092'],
    group_id='group-1',
    auto_offset_reset='earliest',
    enable_auto_commit=False)
consumer.subscribe('test')
print("t1", time.time())
while True:
    print("t2", time.time())
    msg = consumer.poll(timeout_ms=100, max_records=5)  # fetch messages from kafka
    # print(len(msg))
    for i in msg.values():
        for k in i:
            print(k.offset, k.value)
    time.sleep(1)
```

But the output is:

```
t1 1567669170.438951
t2 1567669170.438951
t2 1567669171.8450315
t2 1567669172.945094
t2 1567669174.0471573
t2 1567669175.1472201
0 b'{"ast":"\xe7\x82\xb"}'
1 b'{"ast":"","dm":2}'
2 b'{"ast":"12"}'
3 b'{"ast":"sd"}'
4 b'{"ast":"12ds"}'
t2 1567669176.1822793
```

Why does it take about five polls after connecting to Kafka before any data comes back?

Kafka consumer iterator hangs

Kafka + ZooKeeper are configured in a VM. A producer on my local machine can send messages normally, but in the consumer, after `ConsumerIterator<byte[], byte[]> iterator = stream.iterator();`, any operation on `iterator` hangs. What is going on? Before this call, debug can see the variable's internal values; after it, they can no longer be inspected and the values are all cleared. Source:

```java
package com.weixinjia.recreation.queue.client;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ConsumerGroupExample extends Thread {
    private final ConsumerConnector consumer;
    private final String topic;
    private ExecutorService executor;

    public ConsumerGroupExample(String a_zookeeper, String a_groupId, String a_topic) {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
                createConsumerConfig(a_zookeeper, a_groupId));
        this.topic = a_topic;
    }

    public void shutdown() {
        if (consumer != null) consumer.shutdown();
        if (executor != null) executor.shutdown();
        try {
            if (!executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
                System.out.println("Timed out waiting for consumer threads to shut down, exiting uncleanly");
            }
        } catch (InterruptedException e) {
            System.out.println("Interrupted during shutdown, exiting uncleanly");
        }
    }

    public void run(int a_numThreads) {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(a_numThreads));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        // now launch all the threads
        //
        // executor = Executors.newFixedThreadPool(a_numThreads);
        System.out.println(streams.size());

        // now create an object to consume the messages
        //
        int threadNumber = 0;
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            executor.submit(new ConsumerTest(stream, threadNumber));
            threadNumber++;
            ConsumerIterator<byte[], byte[]> iterator = stream.iterator();
            System.out.println(iterator);
            while (iterator.hasNext()) {
                MessageAndMetadata<byte[], byte[]> next = iterator.next();
                byte[] message = next.message();
                String string = new String(message);
                System.out.println(string);
            }
        }
        System.out.println("消息输出完成");
    }

    private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", a_zookeeper);
        props.put("group.id", a_groupId);
        props.put("zookeeper.session.timeout.ms", "1500");
        props.put("zookeeper.sync.time.ms", "4000");
        props.put("auto.commit.interval.ms", "1000");
        props.put("fetch.message.max.bytes", "10240000");
        props.put("auto.commit.enable", "true");
        return new ConsumerConfig(props);
    }

    public static void main(String[] args) {
        String zooKeeper = "masterServce:2181";
        String groupId = "group-1";
        String topic = "test2";
        int threads = 3;

        ConsumerGroupExample example = new ConsumerGroupExample(zooKeeper, groupId, topic);
        example.run(threads);

        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
        }
        example.shutdown();
    }
}
```

The messages can be queried directly with kafka-console-consumer.sh on the Kafka side.

A Kafka producer (Java client) inside a Docker container cannot send messages!!

During our project rollout, an application deployed inside a Docker container could not send messages with its Kafka client, while the same application deployed on the physical machine hosting that container could. We then tuned two producer parameters in the containerized client, linger.ms and batch.size, after which it could send. What difference do the defaults for linger.ms and batch.size make between running in a container and on a physical machine?

Kafka on Windows fails to start: "The syntax of the command is incorrect. The system cannot find the path specified"

As shown in the screenshot, Kafka keeps reporting "The syntax of the command is incorrect. The system cannot find the path specified" on startup. I have tried many of the fixes suggested online, none of which worked. Please don't just reply "it's a path problem": I don't know how to locate or fix that path problem. Thanks, everyone, for any help! ![screenshot](https://img-ask.csdn.net/upload/201705/24/1495607917_954511.png)

Implementing a consumer with Kafka's Java consumer API: KafkaStream prints no data

kafka 2.2.0; implementing a consumer with the Java consumer API, but KafkaStream prints no data.

```java
package kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumerTest extends Thread {

    // runs fine in a Linux environment
    @Override
    public void run() {
        String topic = "powerTopic";
        Properties pro = new Properties();
        pro.put("zookeeper.connect", "10.2.2.61:2181,10.2.2.62:2181,10.2.2.63:2181");
        pro.put("group.id", "test");
        // pro.put("zookeeper.session.timeout.ms", "4000");
        // pro.put("consumer.timeout.ms", "-1");
        ConsumerConfig paramConsumerConfig = new ConsumerConfig(pro);
        ConsumerConnector cosumerConnector = Consumer.createJavaConsumerConnector(paramConsumerConfig);
        Map<String, Integer> paramMap = new HashMap<String, Integer>();
        paramMap.put(topic, 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStream = cosumerConnector.createMessageStreams(paramMap);
        KafkaStream<byte[], byte[]> kafkastream = messageStream.get(topic).get(0);
        // System.out.println(kafkastream.size());
        System.out.println("hello");
        ConsumerIterator<byte[], byte[]> iterator = kafkastream.iterator();
        while (iterator.hasNext()) {
            // MessageAndMetadata<byte[], byte[]> message = iterator.next();
            // String topic1 = message.topic();
            String msg = new String(iterator.next().message());
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        new KafkaConsumerTest().start();
        new MyProducer01().start();
    }
}
```

The Kafka environment runs on CentOS. Running the program from Eclipse on Windows prints no data, and it neither finishes nor reports an error: ![screenshot](https://img-ask.csdn.net/upload/201912/20/1576807897_599340.jpg) Result after packaging and running it in the cluster: ![screenshot](https://img-ask.csdn.net/upload/201912/20/1576808400_74063.jpg)

Multiple Kafka consumers consuming the same data

When consumers under different group.ids consume the same record of the same topic, why does the offset value not change? Is the record consumed only once?
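Offsets are stored per consumer group (keyed by group, topic and partition), so each group walks the log independently: the same record is delivered once to each group, and one group consuming it does not move another group's offset. A toy model:

```python
log = ["r0", "r1", "r2"]                # one topic partition
offsets = {"group-a": 0, "group-b": 0}  # independent positions per group

def consume(group):
    """Read the next record for this group and advance only its offset."""
    pos = offsets[group]
    record = log[pos]
    offsets[group] = pos + 1
    return record

print(consume("group-a"))  # -> 'r0'
print(consume("group-a"))  # -> 'r1'
print(consume("group-b"))  # -> 'r0': group-b still sees the same record
print(offsets)             # -> {'group-a': 2, 'group-b': 1}
```

So "consumed only once" holds within a group, not across groups.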

How do I ensure my consumers process messages from a Kafka topic strictly in order?

I've never used kafka before. I have two test Go programs accessing a local kafka instance: a reader and a writer. I'm trying to tweak my producer, consumer, and kafka server settings to get a particular behavior.

My writer:

```go
package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	rand.Seed(time.Now().UnixNano())
	topics := []string{
		"policymanager-100",
		"policymanager-200",
		"policymanager-300",
	}
	progress := make(map[string]int)
	for _, t := range topics {
		progress[t] = 0
	}
	producer, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost",
		"group.id":          "0",
	})
	if err != nil {
		panic(err)
	}
	defer producer.Close()
	fmt.Println("producing messages...")
	for i := 0; i < 30; i++ {
		index := rand.Intn(len(topics))
		topic := topics[index]
		num := progress[topic]
		num++
		fmt.Printf("%s => %d ", topic, num)
		msg := &kafka.Message{
			Value: []byte(strconv.Itoa(num)),
			TopicPartition: kafka.TopicPartition{
				Topic: &topic,
			},
		}
		err = producer.Produce(msg, nil)
		if err != nil {
			panic(err)
		}
		progress[topic] = num
		time.Sleep(time.Millisecond * 100)
	}
	fmt.Println("DONE")
}
```

There are three topics on my local kafka: policymanager-100, policymanager-200, policymanager-300. Each has only 1 partition to ensure all messages are sorted by the time kafka receives them. My writer randomly picks one of those topics and issues a message consisting of a number that increments solely for that topic. When it's done running, I expect the queues to look something like this (topic names shortened for legibility):

```
100: 1 2 3 4 5 6 7 8 9 10 11
200: 1 2 3 4 5 6 7
300: 1 2 3 4 5 6 7 8 9 10 11 12
```

So far so good. I'm trying to configure things so that any number of consumers can be spun up and consume these messages in order. By "in order" I mean that no consumer should get message 2 for topic 100 until message 1 is COMPLETED (not just started). If message 1 for topic 100 is being worked on, consumers are free to consume from other topics that currently don't have a message being processed. Once a message of a topic has been sent to a consumer, that entire topic should become "locked" until either a timeout assumes that the consumer failed or the consumer commits the message; then the topic is "unlocked" and its next message is made available for consumption.

My reader:

```go
package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	count := 2
	for i := 0; i < count; i++ {
		go consumer(i + 1)
	}
	fmt.Println("cosuming...")
	// hold this thread open indefinitely
	select {}
}

func consumer(id int) {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":  "localhost",
		"group.id":           "0", // strconv.Itoa(id),
		"enable.auto.commit": "false",
	})
	if err != nil {
		panic(err)
	}
	c.SubscribeTopics([]string{`^policymanager-.+$`}, nil)
	for {
		msg, err := c.ReadMessage(-1)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d) Message on %s: %s ", id, msg.TopicPartition, string(msg.Value))
		time.Sleep(time.Second)
		_, err = c.CommitMessage(msg)
		if err != nil {
			fmt.Printf("ERROR commiting: %+v ", err)
		}
	}
}
```

From my current understanding, the way to achieve this is likely by setting up my consumer properly. I've tried many different variations of this program. I've tried having all my goroutines share the same consumer. I've tried using a different `group.id` for each goroutine. None of these was the right configuration to get the behavior I'm after.

What the posted code does is empty out one topic at a time. Despite having multiple goroutines, the process reads all of 100, then moves to 200, then 300, and only one goroutine actually does all the reading. When I let each goroutine have a different `group.id`, messages get read by multiple goroutines, which I would like to prevent.

My example consumer simply breaks things up with goroutines, but when I work this project into my use case at work, it needs to run across multiple kubernetes instances that won't be talking to each other, so anything that coordinates between goroutines stops working as soon as there are 2 instances on 2 kubes. That's why I'm hoping to make kafka do the gatekeeping I want.

Spring-integrated Kafka: why does the consumer's memory usage keep growing at runtime?

I integrated a Kafka consumer with Spring; after deployment its memory usage keeps climbing until the program dies. Could someone take a look and suggest a fix? Below is my configuration file: ![screenshot](https://img-ask.csdn.net/upload/201810/31/1540978342_260014.png) After running for two days, memory usage reached 1.4 GB. I exported the heap with jmap and analyzed it with Eclipse MAT: ![screenshot](https://img-ask.csdn.net/upload/201810/31/1540978543_231966.png) ![screenshot](https://img-ask.csdn.net/upload/201810/31/1540978554_565464.png) It points into the org.springframework.kafka.listener.KafkaMessageListenerContainer class: ![screenshot](https://img-ask.csdn.net/upload/201810/31/1540978671_113331.png) The LinkedBlockingQueue inside it looks like it is never released. I don't know whether something else needs to be configured; I haven't found a solution.

Can different consumers in the same Kafka consumer group subscribe to different topics?

Suppose a consumer group has two consumers c1 and c2, where c1 subscribes to topic1 and c2 subscribes to topic2. What happens?
1. Does each consumer consume its own topic?
2. Or do both consumers in the group end up subscribed to both topic1 and topic2?
3. Or is it an error?
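A way to reason about it: the group coordinator knows each member's subscription, and the standard assignors hand a member only partitions of topics that member actually subscribed to, so c1 ends up consuming topic1 and c2 topic2 (option 1). A simplified round-robin-per-topic assignor sketch (real assignors differ in detail):

```python
def assign(subscriptions, topic_partitions):
    """Per-topic assignment: each topic's partitions are shared only
    among the group members that subscribed to that topic."""
    assignment = {member: [] for member in subscriptions}
    for topic, partitions in topic_partitions.items():
        members = sorted(m for m, topics in subscriptions.items() if topic in topics)
        if not members:
            continue  # nobody subscribed to this topic
        for i, p in enumerate(partitions):
            assignment[members[i % len(members)]].append((topic, p))
    return assignment

subs = {"c1": {"topic1"}, "c2": {"topic2"}}
parts = {"topic1": [0, 1], "topic2": [0]}
print(assign(subs, parts))
# -> {'c1': [('topic1', 0), ('topic1', 1)], 'c2': [('topic2', 0)]}
```

With identical subscriptions the same function spreads one topic's partitions across both members, which is the more common setup.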


实现简单的文件系统

实验内容: 通过对具体的文件存储空间的管理、文件的物理结构、目录结构和文件操作的实现,加深对文件系统内部功能和实现过程的理解。 要求: 1.在内存中开辟一个虚拟磁盘空间作为文件存储器,在其上实现一个简

机器学习初学者必会的案例精讲

通过六个实际的编码项目,带领同学入门人工智能。这些项目涉及机器学习(回归,分类,聚类),深度学习(神经网络),底层数学算法,Weka数据挖掘,利用Git开源项目实战等。

四分之一悬架模型simulink.7z

首先建立了四分之一车辆悬架系统的数学模型,应用MATLAB/Simulink软件建立该系统的仿真模型,并输入路面激励为随机激励,控制不同的悬架刚度和阻尼,选用最优的参数得到车辆悬架的振动加速度变化曲线

MFC一站式终极全套课程包

该套餐共包含从C小白到C++到MFC的全部课程,整套学下来绝对成为一名C++大牛!!!

C++语言基础视频教程

C++语言基础视频培训课程:本课与主讲者在大学开出的程序设计课程直接对接,准确把握知识点,注重教学视频与实践体系的结合,帮助初学者有效学习。本教程详细介绍C++语言中的封装、数据隐藏、继承、多态的实现等入门知识;主要包括类的声明、对象定义、构造函数和析构函数、运算符重载、继承和派生、多态性实现等。 课程需要有C语言程序设计的基础(可以利用本人开出的《C语言与程序设计》系列课学习)。学习者能够通过实践的方式,学会利用C++语言解决问题,具备进一步学习利用C++开发应用程序的基础。

Java8零基础入门视频教程

这门课程基于主流的java8平台,由浅入深的详细讲解了java SE的开发技术,可以使java方向的入门学员,快速扎实的掌握java开发技术!

HoloLens2开发入门教程

本课程为HoloLens2开发入门教程,讲解部署开发环境,安装VS2019,Unity版本,Windows SDK,创建Unity项目,讲解如何使用MRTK,编辑器模拟手势交互,打包VS工程并编译部署应用到HoloLens上等。

C/C++学习指南全套教程

C/C++学习的全套教程,从基本语法,基本原理,到界面开发、网络开发、Linux开发、安全算法,应用尽用。由毕业于清华大学的业内人士执课,为C/C++编程爱好者的教程。

pokemmo的资源

pokemmo必须的4个rom 分别为绿宝石 火红 心金 黑白 还有汉化补丁 资源不错哦 记得下载

test_head.py

本文件主要是针对使用dlib的imglab标注工具标记的目标检测框和关键点检测而生成的xml文件, 转换为coco数据集格式.

Java面试史上最全的JAVA专业术语面试100问 (前1-50)

前言: 说在前面, 面试题是根据一些朋友去面试提供的,再就是从网上整理了一些。 先更新50道,下一波吧后面的也更出来。 求赞求关注!! 废话也不多说,现在就来看看有哪些面试题 1、面向对象的特点有哪些? 抽象、继承、封装、多态。 2、接口和抽象类有什么联系和区别? 3、重载和重写有什么区别? 4、java有哪些基本数据类型? 5、数组有没有length()方法?String有没有length()方法? 数组没有length()方法,它有length属性。 String有length()方法。 集合求长度用

2019 AI开发者大会

2019 AI开发者大会(AI ProCon 2019)是由中国IT社区CSDN主办的AI技术与产业年度盛会。多年经验淬炼,如今蓄势待发:2019年9月6-7日,大会将有近百位中美顶尖AI专家、知名企业代表以及千余名AI开发者齐聚北京,进行技术解读和产业论证。我们不空谈口号,只谈技术,诚挚邀请AI业内人士一起共铸人工智能新篇章!

linux“开发工具三剑客”速成攻略

工欲善其事,必先利其器。Vim+Git+Makefile是Linux环境下嵌入式开发常用的工具。本专题主要面向初次接触Linux的新手,熟练掌握工作中常用的工具,在以后的学习和工作中提高效率。

DirectX修复工具V4.0增强版

DirectX修复工具(DirectX Repair)是一款系统级工具软件,简便易用。本程序为绿色版,无需安装,可直接运行。 本程序的主要功能是检测当前系统的DirectX状态,如果发现异常则进行修复

20行代码教你用python给证件照换底色

20行代码教你用python给证件照换底色

2019 Python开发者日-培训

本次活动将秉承“只讲技术,拒绝空谈”的理念,邀请十余位身处一线的Python技术专家,重点围绕Web开发、自动化运维、数据分析、人工智能等技术模块,分享真实生产环境中使用Python应对IT挑战的真知灼见。此外,针对不同层次的开发者,大会还安排了深度培训实操环节,为开发者们带来更多深度实战的机会。

我以为我对Mysql事务很熟,直到我遇到了阿里面试官

太惨了,面试又被吊打

相关热词 c#设计思想 c#正则表达式 转换 c#form复制 c#写web c# 柱形图 c# wcf 服务库 c#应用程序管理器 c#数组如何赋值给数组 c#序列化应用目的博客园 c# 设置当前标注样式
立即提问