Kafka with Kerberos integration: error when starting Kafka

With Kerberos (SASL/GSSAPI) enabled, starting Kafka fails because the ZooKeeper authentication check does not pass.
Error message: (screenshot, not preserved)

Kerberos user keytab: (screenshot, not preserved)
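Before digging into the configs, a sanity check that often helps (a sketch, using the keytab path and principal names that appear in the configs below): confirm the keytab really contains keys for both principals and that each can obtain a ticket from the KDC.

```
# List the principals and key versions stored in the keytab
klist -kt /var/kerberos/krb5kdc/kafka.keytab

# Try to obtain a TGT from each principal's key; KRB5_TRACE makes the
# MIT Kerberos libraries print verbose output, which helps pinpoint
# enctype mismatches or KDC connectivity problems
KRB5_TRACE=/dev/stderr kinit -kt /var/kerberos/krb5kdc/kafka.keytab kafka/weiwei@EXAMPLE.COM
KRB5_TRACE=/dev/stderr kinit -kt /var/kerberos/krb5kdc/kafka.keytab zookeeper/192.168.1.41@EXAMPLE.COM
```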

Kerberos /etc/krb5.conf:

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = EXAMPLE.COM
default_tkt_enctypes = arcfour-hmac-md5
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
EXAMPLE.COM = {
kdc = 192.168.1.41
admin_server = 192.168.1.41
}

[domain_realm]
kafka = EXAMPLE.COM
zookeeper = EXAMPLE.COM
weiwei = EXAMPLE.COM
192.168.1.41 = EXAMPLE.COM
127.0.0.1 = EXAMPLE.COM
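One assumption worth verifying (not visible in the configs themselves): Kerberos service principals are tied to host names, so the names used in the principals here (weiwei, 192.168.1.41) must resolve consistently on every machine involved. A quick check:

```
# The host part of a service principal must match how the host is
# actually reached; verify that forward and reverse resolution agree
hostname -f
getent hosts weiwei
getent hosts 192.168.1.41
```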

Kerberos /var/kerberos/krb5kdc/kdc.conf:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
EXAMPLE.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
max_renewable_life = 7d
supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
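For reference, a sketch of how the principals and the shared keytab referenced above are typically created with MIT Kerberos's kadmin.local (principal and path names taken from the configs in this post; adjust as needed):

```
# Create the two service principals with random keys
kadmin.local -q "addprinc -randkey kafka/weiwei@EXAMPLE.COM"
kadmin.local -q "addprinc -randkey zookeeper/192.168.1.41@EXAMPLE.COM"

# Export both keys into the keytab used by the JAAS configs below.
# Note: ktadd rotates the key, so any previously exported keytab for
# these principals becomes stale and must be replaced everywhere.
kadmin.local -q "ktadd -k /var/kerberos/krb5kdc/kafka.keytab kafka/weiwei@EXAMPLE.COM"
kadmin.local -q "ktadd -k /var/kerberos/krb5kdc/kafka.keytab zookeeper/192.168.1.41@EXAMPLE.COM"
```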

Kafka kafka_server_jaas.conf:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="kafka/weiwei@EXAMPLE.COM";
};

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="zookeeper/192.168.1.41@EXAMPLE.COM";
};
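One thing worth double-checking here (based on the stock Kafka SASL/GSSAPI setup, not on anything visible in the screenshot): the Client section is the identity the broker uses to log in to ZooKeeper, so it normally carries the broker's own principal; the zookeeper/... service principal belongs in ZooKeeper's own Server section. Under that assumption, a commonly seen variant is:

```
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="kafka/weiwei@EXAMPLE.COM";
};
```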

ZooKeeper zookeeper_jaas.conf:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="zookeeper/192.168.1.41@EXAMPLE.COM";
};

Settings added to zookeeper.properties:

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
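A related detail (an assumption, since zookeeper.connect is not shown in this post): the broker's ZooKeeper client derives the service principal it requests, zookeeper/<host>, from the host in the connect string, so the connect string should use the same host form as the principal, here the IP:

```
# server.properties (sketch): the host here must line up with the
# zookeeper/192.168.1.41 principal in the JAAS files above
zookeeper.connect=192.168.1.41:2181
```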

Settings added to server.properties:
advertised.host.name=192.168.1.41
advertised.listeners=SASL_PLAINTEXT://192.168.1.41:9092
listeners=SASL_PLAINTEXT://192.168.1.41:9092
#listeners=PLAINTEXT://127.0.0.1:9093
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
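Once the broker starts, a minimal client-side smoke test can confirm the SASL listener works (a sketch; the file names client-sasl.properties and kafka_client_jaas.conf are placeholders, not files from this post):

```
# client-sasl.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
```

```
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf'
bin/kafka-console-producer.sh --broker-list 192.168.1.41:9092 --topic test --producer.config client-sasl.properties
```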

Added to zookeeper-server-start.sh:
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/home/shubei/Downloads/kafka_2.12-1.0.0/config/zookeeper_jaas.conf'

Added to kafka-server-start.sh:
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/home/shubei/Downloads/kafka_2.12-1.0.0/config/kafka_server_jaas.conf'
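If the screenshot's error is not detailed enough, JVM-level Kerberos debugging can be switched on in the same place; these are standard JDK system properties rather than anything Kafka-specific:

```
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/home/shubei/Downloads/kafka_2.12-1.0.0/config/kafka_server_jaas.conf -Dsun.security.krb5.debug=true -Dsun.security.jgss.debug=true'
```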

That's essentially the whole configuration. Chinese New Year is almost here; I'm online hoping for a rescue, and wishing all you experts a happy New Year in advance.
其他相关推荐
springBoot整合kafka启动报错

报错日志如下,groupId=MyGroupId时可以正常连接,但是groupId=etl时就连接不到报错了,麻烦大神看看,如果需要别的相关文件窝在提供,在线等,给跪了 ``` _ _ _ _ _ _ __ _ ___ ____ | | | (_) | | | | (_)/ _| | | |__ \ |___ \ | |__| |_ | | | |_ __ _| |_ __ _ ___| |_ ) | __) | | __ | | | | | | '_ \| | _/ _` / __| __| / / |__ < | | | | | | |__| | | | | | || (_| \__ \ |_ / /_ _ ___) | |_| |_|_| \____/|_| |_|_|_| \__,_|___/\__| |____(_)____/ 2020-02-21 23:53:40.414 mallSearch [main] INFO c.u.j.c.EnableEncryptablePropertiesConfiguration - Bootstraping jasypt-string-boot auto configuration in context: application-1 2020-02-21 23:53:40.414 mallSearch [main] INFO c.c.mall.mallsearch.ReportManagerApplication - The following profiles are active: dev 2020-02-21 23:53:42.148 mallSearch [main] WARN org.mybatis.spring.mapper.ClassPathMapperScanner - No MyBatis mapper was found in '[cn.chinaunicom.sdsi.**.entity, cn.chinaunicom.mall.**.entity]' package. Please check your configuration. 2020-02-21 23:53:42.476 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode! 2020-02-21 23:53:42.476 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Bootstrapping Spring Data repositories in DEFAULT mode. 2020-02-21 23:53:42.695 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 219ms. Found 9 repository interfaces. 2020-02-21 23:53:42.710 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode! 2020-02-21 23:53:42.710 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Bootstrapping Spring Data repositories in DEFAULT mode. 2020-02-21 23:53:42.804 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.AttrCategoryDocumentRepository. 2020-02-21 23:53:42.804 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.AttrDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.AttrValueDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.BrandDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.CategorytreeDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsPoolComSpuDocumentRepository. 
2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsSkuDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsSpuCategorytreeDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.chinaunicom.mall.mallsearch.repository.GoodsSpuDocumentRepository. 2020-02-21 23:53:42.819 mallSearch [main] INFO o.s.d.r.config.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 31ms. Found 0 repository interfaces. 2020-02-21 23:53:43.522 mallSearch [main] INFO o.springframework.cloud.context.scope.GenericScope - BeanFactory id=f1ad9868-69a4-3087-8cf7-8a78137ec329 2020-02-21 23:53:43.554 mallSearch [main] INFO c.u.j.c.EnableEncryptablePropertiesBeanFactoryPostProcessor - Post-processing PropertySource instances 2020-02-21 23:53:43.601 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource configurationProperties [org.springframework.boot.context.properties.source.ConfigurationPropertySourcesPropertySource] to AOP Proxy 2020-02-21 23:53:43.601 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource servletConfigInitParams [org.springframework.core.env.PropertySource$StubPropertySource] to EncryptablePropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource servletContextInitParams [org.springframework.core.env.PropertySource$StubPropertySource] to EncryptablePropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource systemProperties [org.springframework.core.env.MapPropertySource] to EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource systemEnvironment [org.springframework.boot.env.SystemEnvironmentPropertySourceEnvironmentPostProcessor$OriginAwareSystemEnvironmentPropertySource] to EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource random [org.springframework.boot.env.RandomValuePropertySource] to EncryptablePropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource applicationConfig: [classpath:/application-dev.yml] [org.springframework.boot.env.OriginTrackedMapPropertySource] to EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource applicationConfig: [classpath:/config/application.yml] [org.springframework.boot.env.OriginTrackedMapPropertySource] to EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource applicationConfig: [classpath:/application.yml] [org.springframework.boot.env.OriginTrackedMapPropertySource] to 
EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource springCloudClientHostInfo [org.springframework.core.env.MapPropertySource] to EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.632 mallSearch [main] INFO c.u.j.EncryptablePropertySourceConverter - Converting PropertySource defaultProperties [org.springframework.core.env.MapPropertySource] to EncryptableMapPropertySourceWrapper 2020-02-21 23:53:43.694 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$46cfe68c] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:43.710 mallSearch [main] INFO c.u.j.filter.DefaultLazyPropertyFilter - Property Filter custom Bean not found with name 'encryptablePropertyFilter'. Initializing Default Property Filter 2020-02-21 23:53:43.850 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration' of type [org.springframework.security.config.annotation.configuration.ObjectPostProcessorConfiguration$$EnhancerBySpringCGLIB$$bcb9d43] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.069 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'objectPostProcessor' of type [org.springframework.security.config.annotation.configuration.AutowireBeanFactoryObjectPostProcessor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.085 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler@70f5f59d' of type [org.springframework.security.oauth2.provider.expression.OAuth2MethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.085 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration$$EnhancerBySpringCGLIB$$30a03ff5] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.132 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.163 mallSearch [main] INFO c.u.j.resolver.DefaultLazyPropertyResolver - Property Resolver custom Bean not found with name 'encryptablePropertyResolver'. 
Initializing Default Property Resolver 2020-02-21 23:53:44.163 mallSearch [main] INFO c.u.j.detector.DefaultLazyPropertyDetector - Property Detector custom Bean not found with name 'encryptablePropertyDetector'. Initializing Default Property Detector 2020-02-21 23:53:44.179 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'spring.cache-org.springframework.boot.autoconfigure.cache.CacheProperties' of type [org.springframework.boot.autoconfigure.cache.CacheProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.194 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'unifastRedisCacheConfig' of type [cn.chinaunicom.sdsi.security.cache.config.UnifastRedisCacheConfig$$EnhancerBySpringCGLIB$$7185313] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:44.335 mallSearch [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$8f37d806] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2020-02-21 23:53:45.006 mallSearch [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 9011 (http) 2020-02-21 23:53:45.022 mallSearch [main] INFO org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-9011"] 2020-02-21 23:53:45.038 mallSearch [main] INFO org.apache.catalina.core.StandardService - Starting service [Tomcat] 2020-02-21 23:53:45.038 mallSearch [main] INFO org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.16] 2020-02-21 23:53:45.053 mallSearch [main] INFO org.apache.catalina.core.AprLifecycleListener - The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [C:\Program Files\Java\jdk1.8.0_144\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Program Files (x86)\Intel\iCLS Client\;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Intel\iCLS Client\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Java\jdk1.8.0_144\bin;C:\Program Files\Git\cmd;C:\Program Files\apache-maven-3.2.2\bin;C:\Program Files\Mysql\bin;C:\Program Files\MySQL\MySQL Utilities 1.6\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\TortoiseSVN\bin;D:\工具\curl-7.59.0-win64-mingw\bin;C:\WINDOWS\System32\OpenSSH\;C:\Windows\WinSxS\amd64_microsoft-windows-telnet-client_31bf3856ad364e35_10.0.17134.1_none_9db21dbc8e34d070;C:\Program Files\nodejs\;D:\nginx-1.13.7;C:\Program Files\erl10.4\bin;D:\工具\rabbitmq_server-3.7.15\sbin;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\TortoiseGit\bin;C:\Users\xiaolei\AppData\Local\Microsoft\WindowsApps;;C:\Users\xiaolei\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\xiaolei\AppData\Roaming\npm;.] 
2020-02-21 23:53:45.350 mallSearch [main] INFO o.a.c.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext 2020-02-21 23:53:45.350 mallSearch [main] INFO org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 4905 ms 2020-02-21 23:53:47.034 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - no modules loaded 2020-02-21 23:53:47.036 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin] 2020-02-21 23:53:47.036 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.join.ParentJoinPlugin] 2020-02-21 23:53:47.039 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin] 2020-02-21 23:53:47.039 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin] 2020-02-21 23:53:47.039 mallSearch [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.transport.Netty4Plugin] 2020-02-21 23:53:49.084 mallSearch [main] INFO o.s.d.e.client.TransportClientFactoryBean - Adding transport node : 10.236.6.52:9200 2020-02-21 23:54:20.425 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:20.940 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:20.971 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:21.034 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:21.065 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:21.112 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:21.315 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:21.362 mallSearch [main] ERROR 
o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:21.643 mallSearch [main] INFO org.redisson.Version - Redisson 3.12.0 2020-02-21 23:54:21.909 mallSearch [redisson-netty-4-24] INFO o.r.connection.pool.MasterPubSubConnectionPool - 1 connections initialized for 10.236.6.54/10.236.6.54:6379 2020-02-21 23:54:21.956 mallSearch [redisson-netty-4-28] INFO org.redisson.connection.pool.MasterConnectionPool - 20 connections initialized for 10.236.6.54/10.236.6.54:6379 2020-02-21 23:54:22.690 mallSearch [main] ERROR o.s.d.e.r.support.AbstractElasticsearchRepository - failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yZBpr1VZQfm3KgYuKq0eRA}{10.236.6.52}{10.236.6.52:9200}] 2020-02-21 23:54:22.971 mallSearch [main] INFO s.d.s.w.PropertySourcedRequestMappingHandlerMapping - Mapped URL path [/v2/api-docs] onto method [public org.springframework.http.ResponseEntity<springfox.documentation.spring.web.json.Json> springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(java.lang.String,javax.servlet.http.HttpServletRequest)] 2020-02-21 23:54:23.408 mallSearch [main] INFO o.s.security.web.DefaultSecurityFilterChain - Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2408ca4c, org.springframework.security.web.context.SecurityContextPersistenceFilter@29509774, org.springframework.security.web.header.HeaderWriterFilter@3741a170, org.springframework.security.web.authentication.logout.LogoutFilter@1988e095, org.springframework.security.oauth2.provider.authentication.OAuth2AuthenticationProcessingFilter@4870d2e1, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@9198fe3, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4dfe14d4, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2f7e2481, org.springframework.security.web.session.SessionManagementFilter@26c24e5, org.springframework.security.web.access.ExceptionTranslationFilter@4a467f08, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@5de15ce1] _____ __________ __________________ _______ ________ ______________ __ / / /___ __ \___ ____/___ __ \__ __ \___ __ \___ __/__|__ \ _ / / / __ /_/ /__ __/ __ /_/ /_ / / /__ /_/ /__ / ____/ / / /_/ / _ _, _/ _ /___ _ ____/ / /_/ / _ _, _/ _ / _ __/ \____/ /_/ |_| /_____/ /_/ \____/ /_/ |_| /_/ /____/ ........................................................................................................ . uReport, is a Chinese style report engine licensed under the Apache License 2.0, . . which is opensource, easy to use,high-performance, with browser-based-designer. . ........................................................................................................ 2020-02-21 23:54:25.330 mallSearch [main] WARN com.netflix.config.sources.URLConfigurationSource - No URLs will be polled as dynamic configuration sources. 2020-02-21 23:54:25.330 mallSearch [main] INFO com.netflix.config.sources.URLConfigurationSource - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath. 
2020-02-21 23:54:25.345 mallSearch [main] WARN com.netflix.config.sources.URLConfigurationSource - No URLs will be polled as dynamic configuration sources. 2020-02-21 23:54:25.345 mallSearch [main] INFO com.netflix.config.sources.URLConfigurationSource - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath. 2020-02-21 23:54:25.720 mallSearch [main] INFO o.s.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor' 2020-02-21 23:54:27.095 mallSearch [main] WARN o.s.b.a.freemarker.FreeMarkerAutoConfiguration - Cannot find template location(s): [classpath:/templates/] (please add some templates, check your FreeMarker configuration, or set spring.freemarker.checkTemplateLocation=false) 2020-02-21 23:54:29.282 mallSearch [main] INFO o.s.cloud.netflix.eureka.InstanceInfoFactory - Setting initial instance status as: STARTING 2020-02-21 23:54:29.360 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Initializing Eureka in region us-east-1 2020-02-21 23:54:29.438 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using JSON encoding codec LegacyJacksonJson 2020-02-21 23:54:29.438 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using JSON decoding codec LegacyJacksonJson 2020-02-21 23:54:29.610 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using XML encoding codec XStreamXml 2020-02-21 23:54:29.610 mallSearch [main] INFO c.n.discovery.provider.DiscoveryJerseyProvider - Using XML decoding codec XStreamXml 2020-02-21 23:54:29.954 mallSearch [main] INFO c.n.d.shared.resolver.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration 2020-02-21 23:54:30.266 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Disable delta property : false 2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Single vip registry refresh property : null 2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Force full registry fetch : false 2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Application is null : false 2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Registered Applications size is zero : true 2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Application version is -1: true 2020-02-21 23:54:30.375 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Getting all instance registry info from the eureka server 2020-02-21 23:54:30.828 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - The response status is 200 2020-02-21 23:54:30.844 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Starting heartbeat executor: renew interval is: 30 2020-02-21 23:54:30.844 mallSearch [main] INFO com.netflix.discovery.InstanceInfoReplicator - InstanceInfoReplicator onDemand update allowed rate per min is 4 2020-02-21 23:54:30.844 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Discovery Client initialized at timestamp 1582300470844 with initial instances count: 8 2020-02-21 23:54:30.875 mallSearch [main] INFO o.s.c.n.e.serviceregistry.EurekaServiceRegistry - Registering application reportmanager with eureka with status UP 2020-02-21 23:54:30.875 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Saw local status change event 
StatusChangeEvent [timestamp=1582300470875, current=UP, previous=STARTING] 2020-02-21 23:54:30.907 mallSearch [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 1000 auto.offset.reset = earliest bootstrap.servers = [10.236.6.52:9092] check.crcs = true client.id = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = myGroupId heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 2020-02-21 23:54:31.016 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.1 2020-02-21 23:54:31.016 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fa14705e51bd2ce5 2020-02-21 23:54:31.313 mallSearch [DiscoveryClient-InstanceInfoReplicator-0] INFO com.netflix.discovery.DiscoveryClient - DiscoveryClient_REPORTMANAGER/192.168.62.1:9011: registering service... 
2020-02-21 23:54:31.547 mallSearch [DiscoveryClient-InstanceInfoReplicator-0] INFO com.netflix.discovery.DiscoveryClient - DiscoveryClient_REPORTMANAGER/192.168.62.1:9011 - registration status: 204 2020-02-21 23:54:31.656 mallSearch [main] INFO org.apache.kafka.clients.Metadata - Cluster ID: rgWO07ohQH6Rn35woczhUQ 2020-02-21 23:54:31.719 mallSearch [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 1000 auto.offset.reset = earliest bootstrap.servers = [10.236.6.52:9092] check.crcs = true client.id = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = myGroupId heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 2020-02-21 23:54:31.719 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.1 2020-02-21 23:54:31.719 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fa14705e51bd2ce5 2020-02-21 23:54:31.719 mallSearch [main] INFO o.s.scheduling.concurrent.ThreadPoolTaskScheduler - Initializing ExecutorService 2020-02-21 23:54:31.734 mallSearch [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [127.0.0.1:9092] check.crcs = true client.id = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = etl heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 1000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 180000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 120000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 2020-02-21 23:54:31.734 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.1 2020-02-21 23:54:31.750 mallSearch [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fa14705e51bd2ce5 2020-02-21 23:54:32.016 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO org.apache.kafka.clients.Metadata - Cluster ID: rgWO07ohQH6Rn35woczhUQ 2020-02-21 23:54:32.047 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Discovered group coordinator 10.236.6.52:9092 (id: 2147483646 rack: null) 2020-02-21 23:54:32.063 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Revoking previously assigned partitions [] 2020-02-21 23:54:32.063 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.s.kafka.listener.KafkaMessageListenerContainer - partitions revoked: [] 2020-02-21 23:54:32.063 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] (Re-)joining group 2020-02-21 23:54:32.391 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Successfully joined group with generation 21 2020-02-21 23:54:32.406 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.a.k.c.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-2, groupId=myGroupId] Setting newly assigned partitions 
[es-mall-brand-update-0] 2020-02-21 23:54:32.625 mallSearch [org.springframework.kafka.KafkaListenerEndpointContainer#6-0-C-1] INFO o.s.kafka.listener.KafkaMessageListenerContainer - partitions assigned: [es-mall-brand-update-0] 2020-02-21 23:54:32.828 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:33.969 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:35.219 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:36.468 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:37.937 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:39.764 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:41.904 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:44.185 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:46.372 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:48.309 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:50.231 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:52.152 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:54.370 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:56.495 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:54:58.416 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:00.588 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. 
Broker may not be available. 2020-02-21 23:55:02.602 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:04.540 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:06.570 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:08.617 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:10.507 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:12.459 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:14.506 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:16.631 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:18.802 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:20.708 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:22.848 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:24.988 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:27.113 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:29.346 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 2020-02-21 23:55:31.611 mallSearch [main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-3, groupId=etl] Connection to node -1 could not be established. Broker may not be available. 
2020-02-21 23:55:31.750 mallSearch [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata 2020-02-21 23:55:31.782 mallSearch [main] INFO o.s.scheduling.concurrent.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor' 2020-02-21 23:55:32.516 mallSearch [main] INFO o.s.jmx.export.annotation.AnnotationMBeanExporter - Could not unregister MBean [com.github.tobato.fastdfs.conn:name=fdfsConnectionPool,type=FdfsConnectionPool] as said MBean is not registered (perhaps already unregistered by an external process) 2020-02-21 23:56:00.384 mallSearch [main] INFO com.netflix.discovery.DiscoveryClient - Shutting down DiscoveryClient ... 2020-02-21 23:56:00.431 mallSearch [main] WARN o.s.c.annotation.CommonAnnotationBeanPostProcessor - Destroy method on bean with name 'scopedTarget.eurekaClient' threw an exception: org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'eurekaInstanceConfigBean': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!) 2020-02-21 23:56:00.431 mallSearch [main] INFO org.apache.catalina.core.StandardService - Stopping service [Tomcat] 2020-02-21 23:56:00.462 mallSearch [main] INFO o.s.b.a.l.ConditionEvaluationReportLoggingListener - Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled. 
2020-02-21 23:56:00.478 mallSearch [main] ERROR org.springframework.boot.SpringApplication - Application run failed org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185) at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53) at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360) at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158) at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122) at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:893) at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:163) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552) at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142) at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775) at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) at org.springframework.boot.SpringApplication.run(SpringApplication.java:316) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248) at cn.chinaunicom.mall.mallsearch.ReportManagerApplication.main(ReportManagerApplication.java:85) Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata 2020-02-21 23:56:01.634 mallSearch [DiscoveryClient-InstanceInfoReplicator-0] WARN com.netflix.discovery.InstanceInfoReplicator - There was a problem with the instance info replicator org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'eurekaInstanceConfigBean' defined in class path resource [org/springframework/cloud/netflix/eureka/EurekaClientAutoConfiguration.class]: Unsatisfied dependency expressed through method 'eurekaInstanceConfigBean' parameter 0; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'inetUtilsProperties': Could not bind properties to 'InetUtilsProperties' : prefix=spring.cloud.inetutils, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@31c2affc has not been refreshed yet at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:509) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) at org.springframework.beans.factory.support.ConstructorResolver.resolvePreparedArguments(ConstructorResolver.java:804) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:430) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$1(AbstractBeanFactory.java:356) at org.springframework.cloud.context.scope.GenericScope$BeanLifecycleWrapper.getBean(GenericScope.java:390) at org.springframework.cloud.context.scope.GenericScope.get(GenericScope.java:184) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:353) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:672) at com.netflix.appinfo.ApplicationInfoManager$$EnhancerBySpringCGLIB$$918f0f04.refreshDataCenterInfoIfRequired(<generated>) at com.netflix.discovery.DiscoveryClient.refreshInstanceInfo(DiscoveryClient.java:1377) at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:117) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) at java.util.concurrent.FutureTask.run(FutureTask.java) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'inetUtilsProperties': Could not bind properties to 'InetUtilsProperties' : prefix=spring.cloud.inetutils, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@31c2affc has not been refreshed yet at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.bind(ConfigurationPropertiesBindingPostProcessor.java:110) at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.postProcessBeforeInitialization(ConfigurationPropertiesBindingPostProcessor.java:93) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:414) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1754) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) at org.springframework.beans.factory.support.ConstructorResolver.resolvePreparedArguments(ConstructorResolver.java:804) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:430) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515) at 
org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1247) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167) at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760) ... 37 common frames omitted Caused by: java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@31c2affc has not been refreshed yet at org.springframework.context.support.AbstractApplicationContext.assertBeanFactoryActive(AbstractApplicationContext.java:1092) at org.springframework.context.support.AbstractApplicationContext.getBeanProvider(AbstractApplicationContext.java:1134) at org.springframework.boot.context.properties.ConfigurationPropertiesBinder.getBindHandlerAdvisors(ConfigurationPropertiesBinder.java:138) at org.springframework.boot.context.properties.ConfigurationPropertiesBinder.getBindHandler(ConfigurationPropertiesBinder.java:130) at org.springframework.boot.context.properties.ConfigurationPropertiesBinder.bind(ConfigurationPropertiesBinder.java:82) at org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor.bind(ConfigurationPropertiesBindingPostProcessor.java:107) ... 65 common frames omitted Disconnected from the target VM, address: '127.0.0.1:62410', transport: 'socket' ```

Kafka 1.0.0 client: producer fails to send data

Configuration:

```
Properties props = new Properties();
// broker addresses
props.put("bootstrap.servers", "39.108.61.252:9092,39.108.61.252:9093,39.108.61.252:9094");
// acknowledgement setting for requests
props.put("acks", "0");
// retry on request failure
props.put("retries", 1);
// the producer tries to combine records into a single batched request, which helps
// both client and server performance; don't set it larger than the default, or
// memory is wasted and throughput actually drops
//props.put("batch.size", 16384);
// gather records for a short window and send them together
// props.put("linger.ms", 50);
// size of the in-memory buffer
props.put("buffer.memory", 33554432);
// serializer for the message key
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// serializer for the message value
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producer = new KafkaProducer<>(props);
```

Producing data: without Thread.sleep() the messages are never delivered, but with it the consumer receives them. Likewise, if Thread.sleep() is removed but close() is called at the end of producing, the messages also get through.

```
for (int i = 0; i < 10; i++) {
    try {
        Thread.sleep(50);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    kafkaProducer.send(new ProducerRecord<>("topic_user_general_info_update", "simpleKey", "value-" + i));
}
```

This has puzzled me for a long time; I can't tell whether it's the configuration or something else.
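For reference, this behavior matches the asynchronous design of KafkaProducer: send() only appends the record to a client-side buffer, and a background sender thread ships batches later, so if main() exits first the buffered records are silently dropped. A minimal sketch that forces delivery without sleeping, reusing the topic and one broker address from the question (the rest is standard boilerplate, not the asker's code):

```
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "39.108.61.252:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources calls close(), which waits for buffered sends to finish
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("topic_user_general_info_update",
                        "simpleKey", "value-" + i));
            }
            producer.flush(); // explicit: block until every queued record is sent
        }
    }
}
```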

How can a Golang application establish a connection using Kerberos?

I have a task to create SSO (single sign-on) in a Golang application with the help of Kerberos and Active Directory. In other words, if an employee of the company makes a request to a specific URL, the service must return information about him.

I decided to use the gokrb5 library (https://github.com/jcmturner/gokrb5).

What I have done so far:

1) An SPN name for the service was created in Active Directory.

2) A krb5.keytab file for the service was created.

3) Active Directory and the Kerberos server are located on a remote Windows server.

4) The Golang application will run in a Linux Docker container.

5) I installed the Kerberos client in the Docker container.

6) I put the krb5.keytab file in the etc folder of the Docker container.

7) Kerberos realm: EXAMPLE.LOCAL.

8) Hostnames for the KDC servers: CS001, CS002, CS003.

What should the Kerberos client configuration file krb5.conf look like? How can the Golang application correctly send a token to Kerberos?
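Not an authoritative answer, but a minimal krb5.conf sketch built only from the facts stated above (realm EXAMPLE.LOCAL, KDC hosts CS001 to CS003); the domain_realm mappings are assumptions about the AD DNS domain and should be adjusted:

```
[libdefaults]
    default_realm = EXAMPLE.LOCAL
    dns_lookup_realm = false
    dns_lookup_kdc = false

[realms]
    EXAMPLE.LOCAL = {
        kdc = CS001
        kdc = CS002
        kdc = CS003
    }

[domain_realm]
    .example.local = EXAMPLE.LOCAL
    example.local = EXAMPLE.LOCAL
```

On the Go side, gokrb5 ships SPNEGO helpers for attaching the negotiated token to HTTP requests; the exact entry points vary by library version, so the project README is the safest reference.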

[kafka] Kafka message producer throws exceptions

The server had been running normally and Kafka had no problems, but from a certain moment on it kept throwing exceptions; after restarting the server everything returned to normal. What could be the reason? The vast majority of messages threw exceptions, but a small portion threw nothing and yet was never received either. What explains the messages that were lost without any exception? Known detail: the Kafka producer does not specify a partition. ![kafka producer exception](https://img-ask.csdn.net/upload/201804/28/1524849596_87543.png)
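The screenshot with the actual exception isn't visible here, but a common way to separate failed sends from silently lost ones is to attach a callback to every send and require broker acknowledgement. A sketch under those assumptions (the bootstrap address and topic name are hypothetical placeholders, not from the question):

```
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CallbackProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical
        props.put("acks", "all");  // wait for full acknowledgement
        props.put("retries", 3);   // retry transient broker errors
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-topic", "key", "value"); // hypothetical topic
            // The callback fires for every record with either metadata or the exact
            // exception, so sends that were "lost without an exception" become visible.
            producer.send(record, (metadata, e) -> {
                if (e != null) {
                    e.printStackTrace();
                } else {
                    System.out.printf("delivered to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```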

Invalid timestamps when writing a Kafka producer with sarama

I have a Kafka instance running (locally, in a Docker) and I created a producer in Go, using the sarama package (https://godoc.org/github.com/Shopify/sarama).

As I want to use Kafka Streams on my topic, the producer has to embed a timestamp in the messages, or I get this ugly error message:

```
org.apache.kafka.streams.errors.StreamsException: Input record ConsumerRecord(topic = crawler_events, partition = 0, offset = 0, CreateTime = -1, serialized key size = -1, serialized value size = 187, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {XXX}) has invalid (negative) timestamp. Possibly because a pre-0.10 producer client was used to write this record to Kafka without embedding a timestamp, or because the input topic was created before upgrading the Kafka cluster to 0.10+. Use a different TimestampExtractor to process this data.
```

Here is the portion of code sending the message in my Go program:

```
// Init a connection to the Kafka host,
// create the producer,
// and count successes and errors in delivery
func (c *kafkaClient) init() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true
	c.config = *config

	var err error
	c.producer, err = sarama.NewAsyncProducer(c.hosts, &c.config)
	if err != nil {
		panic(err)
	}

	go func() {
		for range c.producer.Successes() {
			c.successes++
		}
	}()
	go func() {
		for range c.producer.Errors() {
			c.errors++
		}
	}()
}

// Send a message to the Kafka topic, WITH TIMESTAMP
func (c *kafkaClient) send(event string) {
	message := &sarama.ProducerMessage{
		Topic:     c.topic,
		Value:     sarama.StringEncoder(event),
		Timestamp: time.Now(),
	}
	c.producer.Input() <- message
	c.enqueued++
}
```

As you can see, the timestamp I try to send is time.Now().

When I run the console consumer to see the received timestamps:

```
docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic crawler_events \
  --from-beginning --property print.timestamp=true
```

I see they are all "-1":

```
CreateTime:-1 {"XXX"}
```

When adding a message to the topic with the console producer, I have the expected timestamps like:

```
CreateTime:1539010180284 hello
```

What am I doing wrong? Thanks for your help.
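Two angles on this, for what they're worth. On the producer side, sarama only uses the timestamp-bearing record format when the configured protocol version is at least 0.10 (conf.Version = sarama.V0_10_0_0 or newer); the default is older, so CreateTime stays -1 no matter what the Timestamp field says. On the consuming side, the error message itself suggests a different TimestampExtractor; a minimal Kafka Streams sketch of that workaround (the application id is hypothetical):

```
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

public class StreamsProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "crawler-streams"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Fall back to wall-clock processing time so records with
        // CreateTime = -1 are accepted instead of raising StreamsException
        props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                WallclockTimestampExtractor.class);
        return props;
    }
}
```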

Spring Boot with Kafka: the message producer is closed after 0 milliseconds

While combining Spring Boot with Kafka, I ran the message producer and the console printed the producer's configuration followed by a notice that the Kafka producer was closed within 0 milliseconds; the send then failed. Is this a problem in the configuration file, or in how the producer sends the message?

The sender code:

```
@Component
@EnableKafka
public class MessageSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // private static final MessageSender sender = new MessageSender();

    /**
     * Send a message through the Kafka client.
     * @param topic   the topic
     * @param message the message body
     * @return whether the send call succeeded
     */
    public boolean sendMessage(String topic, String message) {
        try {
            System.out.println("topic" + topic + "message" + message);
            kafkaTemplate.send(topic, message);
        } catch (Exception e) {
            return false;
        }
        return true;
    }
}
```

The console output:

```
2019-07-09 22:43:34.008  INFO 7916 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values:
    acks = 1
    batch.size = 65536
    bootstrap.servers = [192.168.2.2:9092]
    buffer.memory = 524288
    client.id =
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringDeserializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringDeserializer

2019-07-09 22:43:34.019  INFO 7916 --- [nio-8080-exec-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
2019-07-09 22:43:34.040 DEBUG 7916 --- [nio-8080-exec-1] o.s.b.w.s.f.OrderedRequestContextFilter  : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@2e251d9f
2019-07-09 22:43:34.077 DEBUG 7916 --- [nio-8080-exec-2] o.s.b.w.s.f.OrderedRequestContextFilter  : Bound request context to thread: org.apache.catalina.connector.RequestFacade@2e251d9f
2019-07-09 22:43:34.112 DEBUG 7916 --- [nio-8080-exec-2] o.s.b.w.s.f.OrderedRequestContextFilter  : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@2e251d9f
```
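One thing stands out in the pasted ProducerConfig: key.serializer and value.serializer are both set to org.apache.kafka.common.serialization.StringDeserializer, which is a consumer-side class. A producer needs Serializer implementations; when construction fails on a bad serializer, KafkaProducer cleans up with close(0), which is exactly the "timeoutMillis = 0 ms" line in the log. A sketch of the corrected configuration, assuming the standard Spring Boot spring.kafka property keys:

```
spring.kafka.bootstrap-servers=192.168.2.2:9092
# Producers need Serializers, not Deserializers
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
```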

KafkaConsumer cannot consume data in batches

I'm consuming data with KafkaConsumer and trying to fetch in batches, but I never get a batch: every poll returns a single record, and judging from the logs the offset commits are also odd, regularly succeeding only once in every three attempts. This has bothered me for two days and I can't make sense of it.

```
public class KafkaManualConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        System.setProperty("java.security.auth.login.config", "c:/kafka_client_jaas.conf"); // path to the JAAS config file
        properties.put("security.protocol", "SASL_PLAINTEXT");
        properties.put("sasl.mechanism", "PLAIN");
        properties.put("bootstrap.servers", "VM_0_16_centos:9092"); // kafka:9092
        properties.put("enable.auto.commit", "false");
        // properties.put("session.timeout.ms", 60000);
        properties.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);
        properties.put("fetch.max.wait.ms", 5000);
        properties.put("max.poll.records", 5000);
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("group.id", "yuu67u36");
        // properties.put("receive.buffer.bytes", 3276800);
        // properties.put("heartbeat.interval.ms", 59000);
        // properties.put("client.id", "t4t5t234f34f3f");
        // properties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 32 * 1024 * 1024);
        // properties.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 64 * 1024 * 1024);
        // properties.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 128 * 1024 * 1024);
        // properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // properties.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 2000 * 1024);

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Arrays.asList("topic-video-dev-attendphotos"));
        // kafkaConsumer.subscribe(Arrays.asList("topic-video-dev-stat"));

        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(1000L);
            System.out.println("-----------------");
            System.out.println(records.count());
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("offset = " + record.offset());
                VideoPhotoOuter dto = JSON.parseObject(record.value(), VideoPhotoOuter.class);
                System.out.println(dto.getPhotos().get(0).getPhotoFmt());
                // System.out.printf("offset = %d, value = %s", record.offset(), record.value());
            }
            try {
                kafkaConsumer.commitSync();
                Thread.currentThread().sleep(1000L);
            } catch (Exception ex) {
                // manually throw SQLException to roll back the transaction
            }
        }
        // kafkaConsumer.close();
    }
}
```

The console log:

```
17:32:06.758 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [VM_0_16_centos:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = t4t5t234f34f3f
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 5000
    fetch.min.bytes = 1
    group.id = yuu67u36
    group.instance.id = null
    heartbeat.interval.ms = 59000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 5000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 3276800
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 60000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = PLAIN
    security.protocol = SASL_PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 60000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
.......................
........................
-----------------
0
17:32:08.068 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90261 for partition topic-video-dev-attendphotos-0
-----------------
0
17:32:10.095 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90261 for partition topic-video-dev-attendphotos-0
17:32:12.091 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Node 90 sent a full fetch response that created a new incremental fetch session 725149318 with 1 response partition(s)
17:32:12.092 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Fetch READ_UNCOMMITTED at offset 90261 for partition topic-video-dev-attendphotos-0 returned fetch data (error=NONE, highWaterMark=93666, lastStableOffset = 93666, logStartOffset = 5372, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576)
17:32:12.120 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topic-video-dev-attendphotos.bytes-fetched
17:32:12.120 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topic-video-dev-attendphotos.records-fetched
17:32:12.121 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic-video-dev-attendphotos-0.records-lag
17:32:12.121 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name topic-video-dev-attendphotos-0.records-lead
17:32:12.122 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Added READ_UNCOMMITTED fetch request for partition topic-video-dev-attendphotos-0 at position FetchPosition{offset=90262, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=VM_0_16_centos:9092 (id: 90 rack: null), epoch=0}} to node VM_0_16_centos:9092 (id: 90 rack: null)
17:32:12.122 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Built incremental fetch (sessionId=725149318, epoch=1) for node 90. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 1 partition(s)
17:32:12.122 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(topic-video-dev-attendphotos-0), toForget=(), implied=()) to broker VM_0_16_centos:9092 (id: 90 rack: null)
-----------------
1
offset = 90261
JPG
17:32:12.239 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90262 for partition topic-video-dev-attendphotos-0
-----------------
0
17:32:14.256 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90262 for partition topic-video-dev-attendphotos-0
-----------------
0
17:32:16.279 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Committed offset 90262 for partition topic-video-dev-attendphotos-0
17:32:16.603 [kafka-coordinator-heartbeat-thread | yuu67u36] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Node 90 sent an incremental fetch response for session 725149318 with 1 response partition(s)
17:32:16.603 [kafka-coordinator-heartbeat-thread | yuu67u36] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Fetch READ_UNCOMMITTED at offset 90262 for partition topic-video-dev-attendphotos-0 returned fetch data (error=NONE, highWaterMark=93668, lastStableOffset = 93668, logStartOffset = 5372, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576)
17:32:17.280 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Added READ_UNCOMMITTED fetch request for partition topic-video-dev-attendphotos-0 at position FetchPosition{offset=90263, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=VM_0_16_centos:9092 (id: 90 rack: null), epoch=0}} to node VM_0_16_centos:9092 (id: 90 rack: null)
17:32:17.281 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Built incremental fetch (sessionId=725149318, epoch=2) for node 90. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 1 partition(s)
17:32:17.281 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=t4t5t234f34f3f, groupId=yuu67u36] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(topic-video-dev-attendphotos-0), toForget=(), implied=()) to broker VM_0_16_centos:9092 (id: 90 rack: null)
-----------------
1
```
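A grounded observation from the log above: every fetch response reports recordsSizeInBytes=1048576, which is exactly the default 1 MiB max.partition.fetch.bytes cap, and the messages carry photos, so a single record can fill an entire fetch and poll() returns one record at a time. The commented-out lines in the question already point the right way; a sketch of the settings to enable before creating the consumer (the concrete sizes are assumptions to tune, not prescriptions):

```
// Add to the Properties set up in the question, before new KafkaConsumer<>(...).
// Fetches are capped per partition; with ~1 MiB photo messages, the 1 MiB
// default means each poll can carry only one record.
properties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 32 * 1024 * 1024);
properties.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 64 * 1024 * 1024);
// Optionally make the broker wait (up to fetch.max.wait.ms) for more data:
properties.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 4 * 1024 * 1024); // assumption
```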

How do I consume from the latest offset with the sarama Go Kafka consumer?

I have three questions:

1. What does "oldest offset" mean? Doesn't the oldest offset simply mean offset 0?

```
// OffsetOldest stands for the oldest offset available on the broker for a
// partition.
OffsetOldest int64 = -2
```

2. Assume:

A. three brokers run on a single machine
B. the consumer group has only one consumer thread
C. the consumer relies on the OffsetOldest flag
D. 100 messages have been produced, and the consumer thread has consumed 90 of them so far

If the consumer thread restarts, from which offset will it resume, 91 or 0?

3. With the code below, messages seem to be re-consumed every time the consumer starts, which shouldn't actually happen. Why does re-consumption occur right after a restart (though not of everything)?

```
func (this *consumerGroupHandler) ConsumeClaim(session sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for message := range claim.Messages() {
		this.handler(message)
		session.MarkMessage(message, "")
	}
	return nil
}

ctx := context.Background()
conf := sarama.NewConfig()
conf.Version = sarama.V2_0_0_0
conf.Consumer.Offsets.Initial = sarama.OffsetOldest
conf.Consumer.Return.Errors = true
consumer, err := sarama.NewConsumerGroup(strings.Split(app.Config().KafkaBrokers, ","), groupId, conf)
if err != nil {
	logger.Error("NewConsumerGroupFromClient(%s) error: %v", groupId, err)
	return
}
```
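For comparison, the Java client exposes the same offset semantics, which bears on questions 1 and 2: the initial-offset setting is consulted only when the group has no committed offset, and "oldest" means the broker's current log start offset, not 0. As for question 3, my understanding of sarama is that MarkMessage only marks the offset in memory and the commit happens later (periodically or on rebalance), so anything handled after the last commit is redelivered on restart. A sketch of the equivalent Java configuration:

```
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class OffsetSemantics {
    public static Properties props(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        // auto.offset.reset (like sarama's Consumer.Offsets.Initial) applies ONLY
        // when the group has no committed offset for a partition. With a committed
        // offset, e.g. 90 of 100 messages already consumed, consumption resumes at 91.
        // "earliest" is the broker's log start offset, which can be > 0 once
        // retention has deleted old segments; that is what OffsetOldest maps to.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }
}
```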

Error building librdkafka on Linux

```
kerberos/include -I/usr/kerberos/include -I../src rdkafka_example.c -o rdkafka_example \
  ../src/librdkafka.a -lpthread -lz -L/usr/kerberos/lib64 -lcrypto -ldl -lz -L/usr/kerberos/lib64 -lssl -lcrypto -ldl -lz -lsasl2 -lrt -ldl
../src/librdkafka.a(crc32c.o): In function `crc32c_sw':
/home/interface/cpc/librdkafka-master/src/crc32c.c:116: undefined reference to `le64toh'
collect2: ld returned 1
make[1]: *** [rdkafka_example] Error 1
make[1]: Leaving directory `/home/interface/cpc/librdkafka-master/examples'
make: *** [examples] Error 2
```

The system is Red Hat 5.7.

How to resolve Kylin build-cube timeouts

As shown: ![screenshot](https://img-ask.csdn.net/upload/201811/07/1541579752_827015.png)

Log:

```
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
2018-11-07 16:31:52,520 ERROR [http-bio-7070-exec-5] controller.BasicController:61 : org.apache.kylin.rest.exception.InternalErrorException: Timeout expired while fetching topic metadata
	at org.apache.kylin.rest.controller.CubeController.buildInternal(CubeController.java:396)
	at org.apache.kylin.rest.controller.CubeController.rebuild2(CubeController.java:377)
	at org.apache.kylin.rest.controller.CubeController.build2(CubeController.java:368)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
	at org.springframework.web.servlet.FrameworkServlet.doPut(FrameworkServlet.java:883)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:653)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:158)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:200)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
	at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214)
	at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at com.thetransactioncompany.cors.CORSFilter.doFilter(CORSFilter.java:209)
	at com.thetransactioncompany.cors.CORSFilter.doFilter(CORSFilter.java:244)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
	at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:962)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
	at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1115)
	at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
	at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:318)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
```

Cluster info: Ambari with Kerberos, Kafka 0.10, Hadoop 2.5, Kylin 2.3.2. Hive tables can be read normally and SQL runs fine; only streaming cubes hit these problems. The Kafka environment variables are configured. Any help is appreciated.
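Not a definitive diagnosis, but on a kerberized cluster a bare "Timeout expired while fetching topic metadata" from a Kafka client is often an unauthenticated client talking to a SASL-only listener: the connection is dropped before any metadata arrives. A sketch of the client-side properties such a setup usually needs (whether Kylin 2.3 picks these up from its Kafka consumer configuration or the environment is an assumption to verify against the Kylin docs):

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
```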

Installed Sentry in a CDH cluster; Sentry won't come back up on restart

1. Restarting Sentry after installing it in the CDH cluster fails; the error output is below:

```
Tue Apr 21 09:40:21 CST 2015
JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
using /usr/java/jdk1.7.0_45-cloudera as JAVA_HOME
using 5 as CDH_VERSION
Debug is true storeKey true useTicketCache true useKeyTab true doNotPrompt true ticketCache is null isInitiator true KeyTab is /var/run/cloudera-scm-agent/process/1117-sentry-SENTRY_SERVER/sentry.keytab refreshKrb5Config is true principal is sentry/test1@NBDP.COM tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Refreshing Kerberos configuration
Acquire TGT from Cache
Principal is sentry/test1@NBDP.COM
null credentials from Ticket Cache
principal is sentry/test1@NBDP.COM
Will use keytab
Commit Succeeded
[Krb5LoginModule]: Entering logout
[Krb5LoginModule]: logged out
Subject
Service did not start successfully; not all of the required roles started: Service has only 0 Sentry Server roles running instead of minimum required 1.
```

Any help would be appreciated, thanks!

Flink standalone cluster: the JobManager dies on its own, leaving only a constant stream of WARN logs

I set up a Flink standalone cluster. After startup, submitting and running jobs works fine, and GC also looks normal when I watch it, but the JobManager dies by itself at night while WARN logs stream continuously. Flink version: 1.7.2, three machines, and the web UI shows normal information.

**Question: is the JobManager dying related to this log? I'd like the cluster to run stably; the jobs currently only connect to Kafka and Redis.**

The WARN logs:

```
09-06 14:00:23,430 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:63408
2019-09-06 14:00:23,431 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink-metrics@localhost:63408] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink-metrics@localhost:63408]] Caused by: [Connection refused: localhost/127.0.0.1:63408]
2019-09-06 14:00:23,431 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:30060
2019-09-06 14:00:23,431 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink-metrics@localhost:30060] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink-metrics@localhost:30060]] Caused by: [Connection refused: localhost/127.0.0.1:30060]
```

The cluster startup log:

```
2019-09-06 13:50:33,581 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-09-06 13:50:33,582 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint (Version: 1.7.2, Rev:ceba8af, Date:11.02.2019 @ 14:17:09 UTC)
2019-09-06 13:50:33,582 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: apps
2019-09-06 13:50:33,816 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: apps
2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.161-b12
2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 981 MiBytes
2019-09-06 13:50:33,945 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /apps/svr/jdk1.8.0_161
2019-09-06 13:50:33,947 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Hadoop version: 2.6.5
2019-09-06 13:50:33,947 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options:
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Xms1024m
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Xmx1024m
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Dlog.file=/home/apps/jfy/flink-1.7.2/log/flink-apps-standalonesession-6-arch-dev-rmq.log
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Dlog4j.configuration=file:/home/apps/jfy/flink-1.7.2/conf/log4j.properties
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     -Dlogback.configurationFile=file:/home/apps/jfy/flink-1.7.2/conf/logback.xml
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments:
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     --configDir
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     /home/apps/jfy/flink-1.7.2/conf
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     --executionMode
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint -     cluster
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: /home/apps/jfy/flink-1.7.2/lib/flink-python_2.11-1.7.2.jar:/home/apps/jfy/flink-1.7.2/lib/flink-shaded-hadoop2-uber-1.7.2.jar:/home/apps/jfy/flink-1.7.2/lib/log4j-1.2.17.jar:/home/apps/jfy/flink-1.7.2/lib/slf4j-log4j12-1.7.15.jar:/home/apps/jfy/flink-1.7.2/lib/flink-dist_2.11-1.7.2.jar:::
2019-09-06 13:50:33,948 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-09-06 13:50:33,949 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-06 13:50:33,959 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, 172.31.50.59
2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.size, 1024m
2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2019-09-06 13:50:33,960 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.port, 8081
2019-09-06 13:50:33,973 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint.
2019-09-06 13:50:33,973 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install default filesystem.
2019-09-06 13:50:33,983 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install security context.
2019-09-06 13:50:34,016 INFO org.apache.flink.runtime.security.modules.HadoopModule - Hadoop user set to apps (auth:SIMPLE)
2019-09-06 13:50:34,030 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Initializing cluster services.
2019-09-06 13:50:34,191 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start actor system at 172.31.50.59:6123
2019-09-06 13:50:34,520 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2019-09-06 13:50:34,571 INFO akka.remote.Remoting - Starting remoting
2019-09-06 13:50:34,726 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink@172.31.50.59:6123]
2019-09-06 13:50:34,733 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka.tcp://flink@172.31.50.59:6123
2019-09-06 13:50:34,747 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'jobmanager.rpc.address' instead of proper key 'rest.address'
2019-09-06 13:50:34,757 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /tmp/blobStore-c7a49a00-4241-463b-97d6-f01795c08cde
2019-09-06 13:50:34,760 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:22324 - max concurrent requests: 50 - max backlog: 1000
2019-09-06 13:50:34,774 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl - No metrics reporter configured, no metrics will be exposed/reported.
2019-09-06 13:50:34,775 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Trying to start actor system at 172.31.50.59:0
2019-09-06 13:50:34,790 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2019-09-06 13:50:34,795 INFO akka.remote.Remoting - Starting remoting
2019-09-06 13:50:34,802 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink-metrics@172.31.50.59:44195]
2019-09-06 13:50:34,803 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Actor system started at akka.tcp://flink-metrics@172.31.50.59:44195
2019-09-06 13:50:34,807 INFO org.apache.flink.runtime.dispatcher.FileArchivedExecutionGraphStore - Initializing FileArchivedExecutionGraphStore: Storage directory /tmp/executionGraphStore-be620752-bb92-49c0-9556-f93d802f61c2, expiration time 3600000, maximum cache size 52428800 bytes.
2019-09-06 13:50:34,834 INFO org.apache.flink.runtime.blob.TransientBlobCache - Created BLOB cache storage directory /tmp/blobStore-ac295e58-8bce-4747-80f5-086a3ddf6874
2019-09-06 13:50:34,850 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'jobmanager.rpc.address' instead of proper key 'rest.address'
2019-09-06 13:50:34,851 WARN org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Upload directory /tmp/flink-web-59e5be3d-7736-4a43-ab10-3c5116bfe201/flink-web-upload does not exist, or has been deleted externally. Previously uploaded files are no longer available.
2019-09-06 13:50:34,852 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Created directory /tmp/flink-web-59e5be3d-7736-4a43-ab10-3c5116bfe201/flink-web-upload for file uploads.
2019-09-06 13:50:34,855 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Starting rest endpoint.
2019-09-06 13:50:35,063 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of main cluster component log file: /home/apps/jfy/flink-1.7.2/log/flink-apps-standalonesession-6-arch-dev-rmq.log
2019-09-06 13:50:35,063 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of main cluster component stdout file: /home/apps/jfy/flink-1.7.2/log/flink-apps-standalonesession-6-arch-dev-rmq.out
2019-09-06 13:50:35,202 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Rest endpoint listening at 172.31.50.59:8081
2019-09-06 13:50:35,202 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - http://172.31.50.59:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
2019-09-06 13:50:35,202 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend listening at http://172.31.50.59:8081.
2019-09-06 13:50:35,259 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager .
2019-09-06 13:50:35,274 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/dispatcher .
2019-09-06 13:50:35,288 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - ResourceManager akka.tcp://flink@172.31.50.59:6123/user/resourcemanager was granted leadership with fencing token 00000000000000000000000000000000
2019-09-06 13:50:35,289 INFO org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager - Starting the SlotManager.
2019-09-06 13:50:35,302 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher akka.tcp://flink@172.31.50.59:6123/user/dispatcher was granted leadership with fencing token 00000000-0000-0000-0000-000000000000
2019-09-06 13:50:35,305 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all persisted jobs.
2019-09-06 13:50:35,921 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID d9ac21b93546848cee400e09e79bf55c (akka.tcp://flink@localhost:32199/user/taskmanager_0) at ResourceManager
2019-09-06 13:50:35,931 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering TaskManager with ResourceID e7f27036fca804c716fd6bada9f1e0d6 (akka.tcp://flink@localhost:28648/user/taskmanager_0) at ResourceManager
```
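One detail stands out in the startup log: both TaskManagers register as akka.tcp://flink@localhost:..., and the failing flink-metrics connections also target localhost, an address the JobManager cannot reach from its own host. A sketch of the per-machine fix in flink-conf.yaml (key name taken from the Flink 1.7 documentation; the address is a placeholder for each TaskManager's real, externally reachable IP):

```
# flink-conf.yaml on each TaskManager machine
taskmanager.host: 172.31.50.60   # placeholder: this machine's reachable address
```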

Hive beeline connection: User: root is not allowed to impersonate root

Connecting to Hive through beeline fails with a permission error; I've read many posts but none solved it.

```
beeline> !connect jdbc:hive2://devcrm:10000/default
Connecting to jdbc:hive2://devcrm:10000/default
Enter username for jdbc:hive2://devcrm:10000/default: root
Enter password for jdbc:hive2://devcrm:10000/default: ****
19/04/22 17:25:31 [main]: WARN jdbc.HiveConnection: Failed to connect to devcrm:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://devcrm:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)
```

Hive's hive-site.xml configuration:

```
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
  <description>
    Expects one of [nosasl, none, ldap, kerberos, pam, custom].
    Client authentication types.
      NONE: no authentication check
      LDAP: LDAP/AD based authentication
      KERBEROS: Kerberos/GSSAPI authentication
      CUSTOM: Custom authentication provider (Use with property hive.server2.custom.authentication.class)
      PAM: Pluggable authentication module
      NOSASL: Raw transport
  </description>
</property>
<property>
  <name>hive.server2.thrift.client.user</name>
  <value>root</value>
  <description>Username to use against thrift client</description>
</property>
<property>
  <name>hive.server2.thrift.client.password</name>
  <value>root</value>
  <description>Password to use against thrift client</description>
</property>
```

Hadoop's core-site.xml configuration:

```
<configuration>
  <!-- namenode address -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.11.207:9000</value>
  </property>
  <!-- directory for files generated while using hadoop -->
  <property>
    <name>hadoop.tmp.dir</name>
    <!--<value>file:/usr/local/kafka/hadoop-2.7.6/tmp</value>-->
    <value>file:/home/hadoop/temp</value>
  </property>
  <!-- maximum interval for checkpoint backups of the logs -->
  <!--
  <name>fs.checkpoint.period</name>
  <value>3600</value>
  -->
  <!-- hadoop proxy-user settings -->
  <property>
    <!-- groups the proxy user may act for -->
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <!-- hosts from which the proxy user may access the hdfs cluster -->
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
</configuration>
```
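A hedged observation rather than a definitive fix: the hadoop.proxyuser.root.* entries above only take effect after the NameNode (and HiveServer2) are restarted with the updated core-site.xml, so a stale process is one common explanation for this error persisting. If impersonation isn't actually needed, another option is to turn off HiveServer2's doAs behavior; a sketch for hive-site.xml:

```
<!-- Sketch: run queries as the HiveServer2 process user instead of the
     connecting user, sidestepping the proxyuser check entirely. -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>
```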
