I've just started on a new project and I don't understand what the org.apache.thrift package is for or how to use it. Could someone explain?

The project also involves these classes and interfaces from the org.apache.thrift package:
org.apache.thrift.protocol.TStruct
org.apache.thrift.protocol.TField
org.apache.thrift.scheme.IScheme
org.apache.thrift.TFieldIdEnum
org.apache.thrift.meta_data.FieldMetaData
I don't understand these either.
Any help would be much appreciated.

2 answers

It's essentially a remote procedure call (RPC) framework, similar to RMI in Java. If Thrift is hard to follow, read the RMI introduction in the JDK API docs first. There are plenty of Thrift tutorials; a quick web search will turn them up.
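To make that concrete, here is a minimal sketch of the usual Thrift workflow (HelloService and the file names are made-up examples, not from the asker's project, and the sketch assumes the generated HelloService class is on the classpath). You describe a service in an IDL file, `thrift -gen java` turns it into Java stubs, and those generated stubs are exactly where TStruct, TField, IScheme, TFieldIdEnum and FieldMetaData show up: they are serialization plumbing inside the generated code, not classes you normally touch by hand.

```
// hello.thrift (hypothetical IDL):
//   service HelloService {
//     string sayHello(1: string name)
//   }
// `thrift -gen java hello.thrift` generates HelloService with nested
// Iface, Client and Processor types; that generated code uses
// TStruct/TField/FieldMetaData internally to (de)serialize arguments.

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class HelloDemo {
    // Server side: implement the generated interface.
    static class Handler implements HelloService.Iface {
        public String sayHello(String name) {
            return "hello, " + name;
        }
    }

    static void startServer() throws Exception {
        TServerSocket serverTransport = new TServerSocket(9090);
        HelloService.Processor<Handler> processor =
                new HelloService.Processor<>(new Handler());
        TServer server = new TSimpleServer(
                new TServer.Args(serverTransport).processor(processor));
        server.serve();  // blocks, serving one request at a time
    }

    // Client side: same protocol + transport, then call methods as if local.
    static void callOnce() throws Exception {
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        HelloService.Client client = new HelloService.Client(new TBinaryProtocol(transport));
        System.out.println(client.sayHello("world"));
        transport.close();
    }
}
```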

Related questions
Error getting the cluster object while using the Thrift API to monitor a Storm cluster and topology
org.apache.thrift.transport.TTransportException: Frame size (1213486160) larger than max length (16384000)! at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137) at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) at backtype.storm.generated.Nimbus$Client.recv_getClusterInfo(Nimbus.java:465) at backtype.storm.generated.Nimbus$Client.getClusterInfo(Nimbus.java:453) at com.damacheng009.storm.monitor.logic.Logic.getTopoExpSize(Logic.java:29) at com.damacheng009.storm.monitor.Test.get(Test.java:34) at com.damacheng009.storm.monitor.Test.main(Test.java:84) ClusterSummary clusterSummary = client.getClient().getClusterInfo(); client = ClientManager.getClient("192.168.225.128", 8899);
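A side note on that oddly specific frame size: 1213486160 is 0x48545450, which is the ASCII bytes "HTTP", so the most likely reading is that the client opened a connection to an HTTP endpoint (port 8899 here) rather than Nimbus's Thrift port. A quick sketch that makes the decoding visible:

```
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FrameSizeDecoder {
    public static void main(String[] args) {
        // The "frame size" TFramedTransport complained about:
        int frameSize = 1213486160;
        byte[] firstFour = ByteBuffer.allocate(4).putInt(frameSize).array();
        // Prints "HTTP": the reply was an HTTP response, not a framed Thrift message.
        System.out.println(new String(firstFour, StandardCharsets.US_ASCII));
    }
}
```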
Hive JDBC connection fails
Connecting to Hive over JDBC fails. The server log shows the following: [HiveServer2-Handler-Pool: Thread-28]: server.TThreadPoolServer (TThreadPoolServer.java:run(253)) - Error occurred during processing of message. java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128 at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128 at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:230) at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:262) at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ... 4 more
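For context on this class of error: "Invalid status -128" on the server side usually means the first byte the SASL transport read was a raw Thrift protocol byte, i.e. the client and server disagree about whether SASL is in use. A minimal sketch, assuming (hypothetically) that this HiveServer2 is configured with hive.server2.authentication=NOSASL; in that case the JDBC URL must opt out of SASL as well:

```
import java.sql.Connection;
import java.sql.DriverManager;

public class NoSaslExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // auth=noSasl must match the server setting; host/port are placeholders.
        String url = "jdbc:hive2://127.0.0.1:10000/default;auth=noSasl";
        try (Connection conn = DriverManager.getConnection(url, "hive", "")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```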
Remote JDBC connection to Hive shows the following error in the log (newbie here, waiting online)
2016-11-17 09:03:01,939 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-38]: Error occurred during processing of message. java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128 at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128 at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ... 4 more
Querying Hive tables through a connection pool: after a few queries it fails and can't query any more; restarting JBoss makes it work again
java.sql.SQLException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection timed out Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection timed out Caused by: java.net.SocketException: Connection timed out java.sql.SQLException: Error while cleaning up the server resources Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
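For what it's worth: "Connection timed out" and "Broken pipe" after a period of inactivity usually mean the pool handed back a connection whose underlying socket had already been dropped. A hedged sketch of validate-on-borrow with Commons DBCP (the pool that appears in neighbouring questions on this page; if a JBoss datasource is used instead, its check-valid-connection-sql setting plays the same role, and the host below is a placeholder):

```
import org.apache.commons.dbcp.BasicDataSource;

public class ValidatedHivePool {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.apache.hive.jdbc.HiveDriver");
        ds.setUrl("jdbc:hive2://127.0.0.1:10000/default");  // placeholder host
        ds.setValidationQuery("SELECT 1");  // cheap probe Hive can answer
        ds.setTestOnBorrow(true);           // weed out dead sockets before use
        return ds;
    }
}
```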
Hive connecting to MySQL reports a read-only server error
I set up a Hadoop environment with Hive as the data warehouse and MySQL as the Hive metastore, used for scheduled analysis of log files in user data. While Hive accesses MySQL, the following error appears intermittently: java.sql.SQLException: Query returned non-zero code: 1, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: Could not retrieve transation read-only status server at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:784) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98) at com.sun.proxy.$Proxy0.createTable(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1374) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1407) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102) at com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:1884) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:96) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:607) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:595) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90) at com.sun.proxy.$Proxy11.createTable(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:670) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3959) at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:295) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994) at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:197) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at 
org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) NestedThrowablesStackTrace: java.sql.SQLException: Could not retrieve transation read-only status server at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1094) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:997) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:983) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:928) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:949) at com.mysql.jdbc.ConnectionImpl.isReadOnly(ConnectionImpl.java:3967) at com.mysql.jdbc.ConnectionImpl.isReadOnly(ConnectionImpl.java:3938) at com.jolbox.bonecp.ConnectionHandle.isReadOnly(ConnectionHandle.java:867) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:422) at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:270) at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162) at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197) at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105) at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005) at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386) at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827) at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571) at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513) at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232) at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414) at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218) at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065) at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913) at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:784) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98) at com.sun.proxy.$Proxy0.createTable(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1374) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1407) at 
sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102) at com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:1884) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:96) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:607) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:595) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90) at com.sun.proxy.$Proxy11.createTable(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:670) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3959) at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:295) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994) at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:197) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure Searching online suggested the MySQL driver jar was the problem, but after updating the driver the error still occurs; sometimes execution succeeds and sometimes it fails. Has anyone run into something similar?
Hue querying Hive cannot fetch the logs
2015-04-23 19:00:13,787 WARN [pool-5-thread-1]: thrift.ThriftCLIService (ThriftCLIService.java:FetchResults(538)) - Error fetching results: org.apache.hive.service.cli.HiveSQLException: Expected state FINISHED, but found RUNNING at org.apache.hive.service.cli.operation.Operation.assertState(Operation.java:120) at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:288) at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:192) at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:471) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79) at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37) at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:493) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60) at com.sun.proxy.$Proxy14.fetchResults(Unknown Source) at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:405) at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:530) at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553) at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:55) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745)
Java using Kerberos expires after a while; what should I do?
Our Hadoop cluster uses Kerberos, and we access it from Java with a keytab and a krb5.conf file. The project runs fine, but after 24 hours it starts throwing errors and can no longer reach Hive or HDFS on the cluster, complaining that the ticket is unavailable. What should we do, concretely? ``` javax.security.sasl.SaslException: GSS initiate failed at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:196) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) at org.apache.commons.pool.impl.GenericObjectPool.addObject(GenericObjectPool.java:1691) at org.apache.commons.pool.impl.GenericObjectPool.ensureMinIdle(GenericObjectPool.java:1648) at org.apache.commons.pool.impl.GenericObjectPool.access$700(GenericObjectPool.java:192) at org.apache.commons.pool.impl.GenericObjectPool$Evictor.run(GenericObjectPool.java:1784) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ... 20 common frames omitted ```
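The 24-hour mark matches a default Kerberos ticket_lifetime, so the TGT obtained at startup simply expires. A hedged sketch of keytab-based re-login with Hadoop's UserGroupInformation (the principal and file paths are placeholders):

```
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.security.UserGroupInformation;

public class KerberosKeepAlive {
    public static void main(String[] args) throws IOException {
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        // Initial login from the keytab (placeholder principal/path).
        UserGroupInformation.loginUserFromKeytab("app@EXAMPLE.COM", "/etc/security/app.keytab");

        // Re-login from the keytab before the ticket lifetime (24h here) runs out.
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> {
            try {
                UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
            } catch (IOException e) {
                e.printStackTrace();  // keep trying on the next tick
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}
```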
About the Kerberos "GSS initiate failed" problem; not sure whether my analysis is right, please take a look
I have a Spring Boot project deployed on a server; after running for one day (almost exactly 24 hours by observation) it starts reporting the error below: ``` javax.security.sasl.SaslException: GSS initiate failed at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:196) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) at org.apache.commons.pool.impl.GenericObjectPool.addObject(GenericObjectPool.java:1691) at org.apache.commons.pool.impl.GenericObjectPool.ensureMinIdle(GenericObjectPool.java:1648) at org.apache.commons.pool.impl.GenericObjectPool.access$700(GenericObjectPool.java:192) at org.apache.commons.pool.impl.GenericObjectPool$Evictor.run(GenericObjectPool.java:1784) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ... 20 common frames omitted ``` Since I don't know Kerberos very well, I read some material and then looked at my krb5 config file, and found this:
```
default_realm = ---------
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
---.COM = {
  kdc = --------.com
  admin_server = --------.com
}
```
Is the problem that renewable = true is missing??? Compared with a config I found online, mine is missing that line; is that why the ticket cannot be renewed?
Hive API + HBase API with multithreading reports errors
private static ResultSet rs1 = null; rs1 = statement.executeQuery(sql1); new Thread(new Runnable() { public void run() { try { while(rs1.next()){ if (rs1.getInt("machine_sk")%3 == 1) { System.out.println(rs1.getInt("machine_sk")); } Error report: 1 org.apache.thrift.TApplicationException: FetchResults failed: out of sequence response at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:76) at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:501) at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:488) at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:360) at hhh.mis.App$1.run(App.java:81) at java.lang.Thread.run(Thread.java:748) java.sql.SQLException: Error retrieving next row at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:388) at hhh.mis.App$1.run(App.java:81) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.thrift.TApplicationException: FetchResults failed: out of sequence response at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:76) at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:501) at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:488) at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:360) ... 2 more It printed only one row and then failed with the out-of-sequence error. Any guidance appreciated.
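A hedged observation on the snippet above: a HiveQueryResultSet drives a single Thrift client, and Thrift clients are not thread-safe, so concurrent rs1.next() calls from several threads can interleave FetchResults request/response pairs and produce exactly this "out of sequence response". A minimal sketch giving each thread its own connection (URL, credentials, table and SQL are placeholders):

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PerThreadQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        for (int i = 0; i < 3; i++) {
            final int remainder = i;
            new Thread(() -> {
                // Each thread owns its own Connection/Statement/ResultSet,
                // so no two threads share one Thrift call sequence.
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:hive2://127.0.0.1:10000/default", "user", "pass");
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("select machine_sk from some_table")) {
                    while (rs.next()) {
                        if (rs.getInt("machine_sk") % 3 == remainder) {
                            System.out.println(rs.getInt("machine_sk"));
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}
```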
JDBC connection to Hive times out
HiveServer2 is started and its log looks normal, but connecting with Kettle or with my own Java JDBC code both fail. The error log is: java.sql.SQLException: Could not open connection to jdbc:hive2://192.168.162.129:10000/hivedb: java.net.ConnectException: Connection timed out: connect at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:206) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:185) at com.ljq.hive.HiveJdbcClient.run(HiveJdbcClient.java:21) at com.ljq.hive.HiveJdbcClient.main(HiveJdbcClient.java:46) Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection timed out: connect at org.apache.thrift.transport.TSocket.open(TSocket.java:185) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:248) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203) ... 6 more Caused by: java.net.ConnectException: Connection timed out: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at org.apache.thrift.transport.TSocket.open(TSocket.java:180) ... 9 more error I really have no idea what else to try.
Running a Java program reports Error:(3001, 19) java: ncCommonSvc.AsyncClient.microprobe_call is not abstract
Running the Java program reports: Error:(3001, 19) java: ncCommonSvc.AsyncClient.microprobe_call is not abstract and does not override abstract method getResult() in org.apache.thrift.async.TAsyncMethodCall; Error:(3013, 19) java: getResult() in ncCommonSvc.AsyncClient.microprobe_call cannot override getResult() in org.apache.thrift.async.TAsyncMethodCall: return type void is not compatible with java.lang.Object. What could cause these errors? The code was converted from C++ to Java to test a Thrift interface, so it shouldn't be a problem with the code itself; is there a conflict between the methods the code implements and the Thrift library that was pulled in? I'm a complete Java beginner; I only need Java because of the JMeter tool.
Error when starting Hive on Linux
$beeline -u jdbc:hive2://192.168.141.142:10000 Connecting to jdbc:hive2://192.168.141.142:10000 17/07/11 23:37:38 INFO jdbc.Utils: Supplied authorities: 192.168.141.142:10000 17/07/11 23:37:38 INFO jdbc.Utils: Resolved authority: 192.168.141.142:10000 17/07/11 23:37:38 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://192.168.141.142:10000 17/07/11 23:37:39 ERROR jdbc.HiveConnection: Error opening session org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71) at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:156) at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:143) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:583) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:571) at java.sql.DriverManager.getConnection(DriverManager.java:187) at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142) at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207) at org.apache.hive.beeline.Commands.connect(Commands.java:1149) at org.apache.hive.beeline.Commands.connect(Commands.java:1070) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:970) at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:707) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:757) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467) Error: Could not establish connection to jdbc:hive2://192.168.141.142:10000: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) (state=08S01,code=0) Beeline version 1.6.2 by Apache Hive 0: jdbc:hive2://192.168.141.142:10000 (closed)>
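"Required field 'client_protocol' is unset" is the classic symptom of a Thrift protocol-version mismatch between client and server; here the Beeline reporting version 1.6.2 evidently comes from a different build than the HiveServer2 it is talking to. A hedged sketch of the usual workaround, assuming HIVE_HOME points at the same installation the server actually runs from:

```
# Hypothetical: run the beeline that ships with the same Hive build as the server,
# so both sides speak the same client protocol version
$HIVE_HOME/bin/beeline -u jdbc:hive2://192.168.141.142:10000
```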
Guys, please help me fix this bug!!
Exception in thread "main" java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://192.168.42.130:10000/hive_1: null at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:209) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) at java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) at hive.JDBC_Hive.main(JDBC_Hive.java:16) Caused by: org.apache.thrift.transport.TTransportException at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:227) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:182) ... 4 more
That's the error message. The code is below:
package hive;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
public class JDBC_Hive {
    public static void main(String[] args) throws Throwable {
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
        } catch (Exception e) {
            e.printStackTrace();
        }
        String url = "jdbc:hive2://192.168.42.130:10000/hive_1";
        Connection conn = DriverManager.getConnection(url, "root", "root");
        String sql = "select count(*) num from nnn";
        PreparedStatement prepareStatement = conn.prepareStatement(sql);
        long time = System.currentTimeMillis();
        ResultSet rs = prepareStatement.executeQuery(sql);
        while (rs.next()) {
            int num = rs.getInt("num");
            System.out.println(num);
        }
        long timeUsed = System.currentTimeMillis() - time;
        System.out.println("time " + timeUsed + "mm");
        rs.close();
        prepareStatement.close();
        conn.close();
    }
}
HiveServer2 is up, Thrift is up, but JDBC still can't connect to Hive. Crowdsourcing a solution!
Web project connecting to Hive gets the error below
java.lang.ClassNotFoundException: org.apache.hive.service.cli.thrift.TCLIService$Iface at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:579) at java.sql.DriverManager.getConnection(DriverManager.java:221) at TTT.doGet(TTT.java:63) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:723) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:879) at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:617) at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1778) at java.lang.Thread.run(Thread.java:722)
Code:
try {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}
Connection conn = null;
try {
    conn = DriverManager.getConnection("jdbc:hive2://133.37.135.212:10000", "hive", "hive");
} catch (SQLException e) {
    e.printStackTrace();
    try {
        conn.close();
    } catch (SQLException e1) {
        e1.printStackTrace();
    }
}
Jars added: hadoop-auth.jar, hadoop-common-2.7.1.2.3.0.0-2557.jar, hive-exec-1.2.1.2.3.0.0-2557.jar, hive-jdbc-1.2.1.2.3.0.0-2557-standalone.jar, hive-jdbc-1.2.1.2.3.0.0-2557.jar. JDK version: 1.7
Hive JDBC connection SASL authentication problem
[Java code]
package org.neworigin.hive.Hive_JDBC;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
public class App {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.170.100:10000/student", " neworigin", "123456 ");
        PreparedStatement ppst = conn.prepareStatement("select * from jdbchive");
        ResultSet rs = ppst.executeQuery();
        while (rs.next()) {
            int id = rs.getInt("id");
            String name = rs.getString("name");
            int age = rs.getInt("age");
            System.out.println(id + "," + name + "," + age);
        }
        conn.close();
        ppst.cancel();
        rs.close();
    }
}
[Output]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Users/Hw_PC/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.4.1/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/Users/Hw_PC/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
12,chw,22
22,zrt,23
Exception in thread "main" java.sql.SQLException: org.apache.thrift.transport.TTransportException: SASL authentication not complete at org.apache.hive.jdbc.HiveStatement.cancel(HiveStatement.java:174) at org.neworigin.hive.Hive_JDBC.App.main(App.java:24) Caused by: org.apache.thrift.transport.TTransportException: SASL authentication not complete at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:474) at org.apache.thrift.transport.TSaslClientTransport.write(TSaslClientTransport.java:37) at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:178) at org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:106) at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:70) at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62) at org.apache.hive.service.rpc.thrift.TCLIService$Client.send_CancelOperation(TCLIService.java:484) at org.apache.hive.service.rpc.thrift.TCLIService$Client.CancelOperation(TCLIService.java:476) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1412) at com.sun.proxy.$Proxy10.CancelOperation(Unknown Source) at org.apache.hive.jdbc.HiveStatement.cancel(HiveStatement.java:168) ... 1 more
(All the rows in the table come back fine, but the authentication error is always reported at the end. Confused.)
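A hedged reading of the snippet above: conn.close() runs before ppst.cancel() and rs.close(), so by the time cancel() sends its Thrift CancelOperation call the SASL transport has already been torn down, hence "SASL authentication not complete". A sketch that closes resources in reverse order of creation; try-with-resources does this automatically:

```
try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://192.168.170.100:10000/student", "neworigin", "123456");
     PreparedStatement ppst = conn.prepareStatement("select * from jdbchive");
     ResultSet rs = ppst.executeQuery()) {
    while (rs.next()) {
        System.out.println(rs.getInt("id") + "," + rs.getString("name") + "," + rs.getInt("age"));
    }
} // closed here as rs, then ppst, then conn: no call touches a dead transport
```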
Spark integrated with Tachyon keeps reporting connection failures
Failed to connect (1) with master @ master/192.168.111.130:19998 : java.net.ConnectException: Connection refused Detailed log: org.apache.thrift.TException: Service name not found in message name: user_getUserId. Did you forget to use a TMultiplexProtocol in your client? at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:103) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
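The log itself points at the mismatch: the server side is running a TMultiplexedProcessor, so each client call has to carry a service name. A minimal client-side sketch; the service name "UserMasterService" and the framed transport are assumptions here, the real values come from whatever the server registered:

```
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TMultiplexedProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class MultiplexedClient {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TFramedTransport(new TSocket("192.168.111.130", 19998));
        transport.open();
        TProtocol base = new TBinaryProtocol(transport);
        // Route calls to one named service registered on the multiplexed server.
        TMultiplexedProtocol proto = new TMultiplexedProtocol(base, "UserMasterService");
        // UserService.Client client = new UserService.Client(proto);  // generated stub
        transport.close();
    }
}
```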
Errors in the Java files generated by Thrift
I have a Linux machine with Thrift installed:
```
[root@localhost workspace]# thrift -version
Thrift version 0.14.0
```
Then I wrote a hello.thrift file:
```
service HelloService {
    string sayHello (1: string name);
}
```
Then ran:
```
thrift -gen java hello.thrift
```
Then I opened the generated HelloService.java in the current directory, brought it into Eclipse, and started writing the server and client, and got errors:
![screenshot](https://img-ask.csdn.net/upload/201912/10/1575945621_252555.png)
![screenshot](https://img-ask.csdn.net/upload/201912/10/1575945629_175679.png)
Maven pom file:
```
<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.10.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>
```
Does anyone know the cause? I couldn't find this problem through searching either.
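One detail worth flagging in the report above: the compiler is 0.14.0 but the pom pulls in libthrift 0.10.0, and code generated by one Thrift version generally only compiles against the matching runtime. A hedged sketch of the dependency change, assuming the 0.14.0-generated sources are kept:

```
<!-- Hypothetical fix: keep libthrift at the same version as the thrift compiler -->
<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.14.0</version>
</dependency>
```

The mirror-image alternative is to keep libthrift 0.10.0 and regenerate the sources with a 0.10.0 compiler.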
Hive beeline connection: User: root is not allowed to impersonate root
Beeline can't connect. This has been bothering me for half a month; please point me in the right direction. My Hadoop deployment is single-node. Hive itself can run queries and create databases. Using !connect jdbc:hive2://devcrm:10000 hits a permissions problem:
```
beeline> !connect jdbc:hive2://devcrm:10000
Connecting to jdbc:hive2://devcrm:10000
Enter username for jdbc:hive2://devcrm:10000: hadoop
Enter password for jdbc:hive2://devcrm:10000: ******
19/04/23 15:36:53 [main]: WARN jdbc.HiveConnection: Failed to connect to devcrm:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://devcrm:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop (state=08S01,code=0)
```
Connecting with beeline -u jdbc:hive2//devcrm:10000 -n hadoop doesn't work either:
```
[root@devcrm hadoop]# beeline -u jdbc:hive2//devcrm:10000 -n hadoop
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/kafka/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/kafka/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
scan complete in 1ms
scan complete in 963ms
No known driver to handle "jdbc:hive2//devcrm:10000"
Beeline version 2.3.0 by Apache Hive
```
hive-site.xml:
```
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.12.77:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>hive.server2.thrift.client.user</name>
<value>hadoop</value>
<description>Username to use against thrift client</description>
</property>
<property>
<name>hive.server2.thrift.client.password</name>
<value>hadoop</value>
<description>Password to use against thrift client</description>
</property>
```
core-site.xml:
```
<configuration>
<!-- NameNode address -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.11.207:9000</value>
</property>
<!-- Directory for files generated while using Hadoop -->
<property>
<name>hadoop.tmp.dir</name>
<!--<value>file:/usr/local/kafka/hadoop-2.7.6/tmp</value>-->
<value>file:/home/hadoop/temp</value>
</property>
<!-- Maximum time between checkpoint backups of the log -->
<!-- <name>fs.checkpoint.period</name> <value>3600</value> -->
<!-- Hadoop proxy-user settings -->
<property>
<!-- The proxy user root may act on behalf of clients connecting from any host -->
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<!-- Groups the proxy user may impersonate -->
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
```
hdfs-site.xml:
```
<configuration>
<!-- HDFS replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- NameNode storage location -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/kafka/hadoop-2.7.6/tmp/dfs/name</value>
</property>
<!-- DataNode storage location -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/kafka/hadoop-2.7.6/tmp/dfs/data</value>
</property>
<property>
<name>dfs.secondary.http.address</name>
<value>192.168.11.207:50090</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<!-- Enable WebHDFS -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
```
The page at http://192.168.11.207:10002/ does show HiveServer2's start time: ![screenshot](https://img-ask.csdn.net/upload/201904/23/1556005658_291513.png)
Hive log:
```
2019-04-24T09:20:11,829 INFO [main] http.HttpServer: Started HttpServer[hiveserver2] on port 10002 2019-04-24T09:20:50,464 INFO [HiveServer2-Handler-Pool: Thread-38] thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964 2019-04-24T09:20:50,494 INFO [b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964 2019-04-24T09:20:50,495 INFO [b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964 2019-04-24T09:20:50,494 WARN [HiveServer2-Handler-Pool: Thread-38] service.CompositeService: Failed to open session java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] 
at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:362) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:193) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:440) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1377) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1362) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-2.3.0.jar:2.3.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_80] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_80] at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80] Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:606) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] ... 21 more Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate hadoop at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776) ~[hadoop-hdfs-2.7.6.jar:?] 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:704) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:650) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] ... 
21 more 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Updating thread name to b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Updating thread name to b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,495 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,509 WARN [HiveServer2-Handler-Pool: Thread-38] thrift.ThriftCLIService: Error opening session: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:419) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:362) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:193) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:440) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1377) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1362) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-2.3.0.jar:2.3.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_80] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_80] at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80] Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] 
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] ... 13 more Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:606) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] ... 13 more Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate hadoop at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776) ~[hadoop-hdfs-2.7.6.jar:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117) ~[hadoop-hdfs-2.7.6.jar:?] 
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:704) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:650) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] ... 13 more ```
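Two separate things seem visible in the transcript above, offered as a hedged reading: the "No known driver" failure is simply the malformed URL (jdbc:hive2//devcrm:10000 is missing the colon after hive2), and the impersonation error means the NameNode has not picked up the hadoop.proxyuser.root.* entries from core-site.xml. A sketch of the retry, assuming the core-site.xml shown is the one the cluster actually loads:

```
# Have the NameNode reload the proxy-user (impersonation) settings
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
# Note the colon after hive2: jdbc:hive2://...
beeline -u "jdbc:hive2://devcrm:10000" -n hadoop
```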
Connection error when submitting a jar to Storm (tried all kinds of fixes, none worked)
Exception in thread "main" java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused: connect
	at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:38)
	at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:116)
	at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:70)
	at storm.TopoSubmitterClient.main(TopoSubmitterClient.java:33)
Caused by: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused: connect
	at org.apache.thrift7.transport.TSocket.open(TSocket.java:183)
	at org.apache.thrift7.transport.TFramedTransport.open(TFramedTransport.java:81)
	at backtype.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:83)
	at backtype.storm.security.auth.ThriftClient.<init>(ThriftClient.java:63)
	at backtype.storm.utils.NimbusClient.<init>(NimbusClient.java:47)
	at backtype.storm.utils.NimbusClient.<init>(NimbusClient.java:43)
	at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:36)
	... 3 more
Caused by: java.net.ConnectException: Connection refused: connect
	at java.net.DualStackPlainSocketImpl.connect0(Native Method)
	at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
	at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
	at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
	at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
	at java.net.PlainSocketImpl.connect(Unknown Source)
	at java.net.SocksSocketImpl.connect(Unknown Source)
	at java.net.Socket.connect(Unknown Source)
	at org.apache.thrift7.transport.TSocket.open(TSocket.java:178)
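"Connection refused" at this point means the Thrift socket to Nimbus was never established: either Nimbus is not running, or the client is pointed at the wrong host/port (the default is localhost:6627). First confirm Nimbus is actually listening on its Thrift port, then point the submitter at it explicitly. A minimal sketch against the old backtype.storm API that this trace comes from; the host value is a placeholder and must match nimbus.host in the cluster's storm.yaml:

```
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class TopoSubmitterClient {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // builder.setSpout(...) / builder.setBolt(...) go here;
        // a topology without at least one spout is rejected as invalid.

        Config conf = new Config();
        // Point the client at the machine where Nimbus actually runs
        // ("192.168.1.101" is a placeholder). These values must match
        // nimbus.host / nimbus.thrift.port on the cluster, and the port
        // must be reachable from this machine (check firewalls).
        conf.put(Config.NIMBUS_HOST, "192.168.1.101");
        conf.put(Config.NIMBUS_THRIFT_PORT, 6627);

        StormSubmitter.submitTopology("demo-topology", conf, builder.createTopology());
    }
}
```

If the config already looks right, verify from the client machine that the port is open at all (e.g. with telnet); a hostname that resolves differently on the client than on the cluster produces exactly this exception.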