A question about starting HiveServer2

The local Hive CLI starts fine, but HiveServer2 will not start. Does starting HiveServer2 require configuring hive-site.xml? And is Kerberos authentication required? The startup output is below:

```
2015-09-21 13:51:39,386 INFO [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(339)) - Starting HiveServer2
2015-09-21 13:51:40,590 WARN [main]: util.NativeCodeLoader (NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-09-21 13:51:44,247 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(589)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2015-09-21 13:51:44,551 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(289)) - ObjectStore, initialize called
2015-09-21 13:51:50,793 INFO [main]: metastore.ObjectStore (ObjectStore.java:getPMF(370)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2015-09-21 13:51:55,856 INFO [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:(139)) - Using direct SQL, underlying DB is DERBY
2015-09-21 13:51:55,866 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(272)) - Initialized ObjectStore
2015-09-21 13:51:56,799 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles_core(663)) - Added admin role in metastore
2015-09-21 13:51:56,806 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles_core(672)) - Added public role in metastore
2015-09-21 13:51:56,953 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers_core(712)) - No user is added in admin role, since config is empty
2015-09-21 13:51:57,629 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_all_databases
2015-09-21 13:51:57,643 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases

2015-09-21 13:51:57,712 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_functions: db=default pat=*
2015-09-21 13:51:57,713 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=*
2015-09-21 13:52:01,499 INFO [main]: session.SessionState (SessionState.java:createPath(641)) - Created local directory: /home/hadoop/hadoop/hive-1.2.1/tmp/resources
2015-09-21 13:52:01,739 INFO [main]: session.SessionState (SessionState.java:createPath(641)) - Created HDFS directory: /tmp/hive/hadoop/ffb077d3-2845-47db-ae17-0436583484d7
2015-09-21 13:52:01,775 INFO [main]: session.SessionState (SessionState.java:createPath(641)) - Created local directory: /home/hadoop/hadoop/hive-1.2.1/tmp/ffb077d3-2845-47db-ae17-0436583484d7
2015-09-21 13:52:01,788 INFO [main]: session.SessionState (SessionState.java:createPath(641)) - Created HDFS directory: /tmp/hive/hadoop/ffb077d3-2845-47db-ae17-0436583484d7/_tmp_space.db
2015-09-21 13:52:04,648 INFO [main]: service.CompositeService (SessionManager.java:initOperationLogRootDir(135)) - Operation log root directory is created: /home/hadoop/hadoop/hive-1.2.1/tmp/operation_logs
2015-09-21 13:52:04,655 INFO [main]: service.CompositeService (SessionManager.java:createBackgroundOperationPool(90)) - HiveServer2: Background operation thread pool size: 100
2015-09-21 13:52:04,655 INFO [main]: service.CompositeService (SessionManager.java:createBackgroundOperationPool(92)) - HiveServer2: Background operation thread wait queue size: 100
2015-09-21 13:52:04,655 INFO [main]: service.CompositeService (SessionManager.java:createBackgroundOperationPool(95)) - HiveServer2: Background operation thread keepalive time: 10 seconds
2015-09-21 13:52:04,714 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:OperationManager is inited.
2015-09-21 13:52:04,714 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:SessionManager is inited.
2015-09-21 13:52:04,714 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:CLIService is inited.
2015-09-21 13:52:04,714 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:ThriftBinaryCLIService is inited.
2015-09-21 13:52:04,714 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:HiveServer2 is inited.
2015-09-21 13:52:04,715 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:OperationManager is started.
2015-09-21 13:52:04,715 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:SessionManager is started.
2015-09-21 13:52:04,716 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:CLIService is started.
2015-09-21 13:52:04,718 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(289)) - ObjectStore, initialize called
2015-09-21 13:52:04,766 INFO [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:(139)) - Using direct SQL, underlying DB is DERBY
2015-09-21 13:52:04,767 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(272)) - Initialized ObjectStore
2015-09-21 13:52:04,767 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_databases: default
2015-09-21 13:52:04,769 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=get_databases: default

2015-09-21 13:52:04,796 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: Shutting down the object store...
2015-09-21 13:52:04,799 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=Shutting down the object store...

2015-09-21 13:52:04,800 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: Metastore shutdown complete.
2015-09-21 13:52:04,800 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=Metastore shutdown complete.

2015-09-21 13:52:04,800 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:ThriftBinaryCLIService is started.
2015-09-21 13:52:04,800 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:HiveServer2 is started.
2015-09-21 13:52:05,802 INFO [Thread-11]: thrift.ThriftCLIService (ThriftBinaryCLIService.java:run(98)) - Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
2015-09-21 13:52:07,476 INFO [org.apache.hadoop.util.JvmPauseMonitor$Monitor@52bd9a27]: util.JvmPauseMonitor (JvmPauseMonitor.java:run(193)) - Detected pause in JVM or host machine (eg GC): pause of approximately 1051ms
No GCs detected
```
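For what it is worth, the log above ends with "Service:HiveServer2 is started" and the Thrift service binding port 10000, so the server does appear to come up; a foreground HiveServer2 simply stays quiet after that line. On the two questions: HiveServer2 reads the same hive-site.xml as the Hive CLI, so no separate config file is needed, and Kerberos is only involved if hive.server2.authentication is set to KERBEROS — the default (NONE) needs no Kerberos at all. A minimal sketch of the relevant properties, assuming a plain single-node setup (the values shown are the stock defaults):

```
<!-- hive-site.xml: HiveServer2 basics (defaults shown; only needed to override) -->
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value> <!-- set to KERBEROS only on a kerberized cluster -->
</property>
```

A quick way to check that it is accepting connections: `beeline -u jdbc:hive2://localhost:10000 -n hadoop`.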

Other related questions
Connection question about the HiveServer2 service
After starting the HiveServer2 service, Python can work with the Hive databases through it. How can I see which processes are connected to the HiveServer2 service?
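One low-tech way to see connected clients, assuming HiveServer2 is on its default port 10000: list the established TCP connections to that port on the server.

```
# each ESTABLISHED line is one client connection; -p shows the remote pid/program (needs root)
netstat -antp | grep :10000 | grep ESTABLISHED
# alternative view:
lsof -iTCP:10000 -sTCP:ESTABLISHED
```

In Hive 2.x the HiveServer2 web UI (port 10002 by default) also lists open sessions.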
Hive startup problem: HiveServer2 never finishes starting
```
2015-09-22 16:50:55,690 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(272)) - Initialized ObjectStore
2015-09-22 16:50:55,691 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_databases: default
2015-09-22 16:50:55,691 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=get_databases: default
2015-09-22 16:50:55,701 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: Shutting down the object store...
2015-09-22 16:50:55,702 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=Shutting down the object store...
2015-09-22 16:50:55,703 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: Metastore shutdown complete.
2015-09-22 16:50:55,703 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=hadoop ip=unknown-ip-addr cmd=Metastore shutdown complete.
2015-09-22 16:50:55,703 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:ThriftBinaryCLIService is started.
2015-09-22 16:50:55,704 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:HiveServer2 is started.
2015-09-22 16:50:55,793 INFO [Thread-11]: thrift.ThriftCLIService (ThriftBinaryCLIService.java:run(98)) - Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
```
Why does hiveserver2 hang at starting ThriftBinaryCLIService? I already changed bind.host in the config file.
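Note that "Starting ThriftBinaryCLIService on port 10000..." is normally the last line a foreground HiveServer2 prints once it is up, so being "stuck" here may simply mean it is running and waiting for clients — worth trying a beeline connection before assuming a hang. For reference, the bind-address setting the question mentions lives in hive-site.xml (the hostname below is a placeholder):

```
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>your-hostname</value>
</property>
```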
HiveServer2 hangs partway through startup
```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
```
After that nothing more is printed; it hangs here.
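The SLF4J lines are only warnings about two competing bindings and do not by themselves stop startup, but removing the duplicate makes the real log output visible again; a sketch using the jar paths from the message above:

```
# keep one binding and move the other out of the classpath
mv /usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar /usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar.bak
```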
Unable to read HiveServer2 uri from ZooKeeper
```
Exception in thread "main" java.sql.SQLException: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 uri from ZooKeeper
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:127)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:247)
	at JDBCExample.main(JDBCExample.java:142)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 uri from ZooKeeper
	at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.getNextServerUriFromZooKeeper(ZooKeeperHiveClientHelper.java:109)
	at org.apache.hive.jdbc.Utils.resolveAuthorityUsingZooKeeper(Utils.java:492)
	at org.apache.hive.jdbc.Utils.resolveAuthority(Utils.java:464)
	at org.apache.hive.jdbc.Utils.parseURL(Utils.java:371)
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:125)
	... 9 more
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hiveserver2
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2231)
	at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214)
	at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203)
	at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
	at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:199)
	at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191)
	at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38)
	at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.getNextServerUriFromZooKeeper(ZooKeeperHiveClientHelper.java:91)
	... 13 more
```
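KeeperErrorCode = ConnectionLoss for /hiveserver2 means the JDBC client never got a working ZooKeeper connection (or HiveServer2 has not registered under that znode). For reference, a service-discovery URL looks like the following — the ZooKeeper hosts are placeholders, and the namespace must match hive.server2.zookeeper.namespace on the server:

```
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```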
Problems connecting Zeppelin to Hive and Spark
1. When connecting to Hive: Zeppelin connects through HiveServer2, and because there is so much metadata it feels like Zeppelin walks all of it on every run; every statement executes with a delay of over an hour. 2. Connecting to Spark SQL fails: java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT at org.apache.spark.sql.hive.HiveUtils$.hiveClientConfig
Hive dies on its own after running for a while
CentOS with Hadoop and Hive deployed; I can create tables, load data, and query, but after running for a while the process dies, and the log reports no errors. This is the start command: bin/hive --service hiveserver2 &
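One thing worth ruling out: a job started with a bare `&` still belongs to the login session and receives SIGHUP when that session ends, which matches "dies after a while with nothing in the log". A more durable start:

```
nohup bin/hive --service hiveserver2 > /tmp/hiveserver2.out 2>&1 &
```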
Python: error connecting to Impala
What is causing this? The IP is not wrong.
```
Traceback (most recent call last):
  File "mid_tables.py", line 17, in <module>
    cursor= conn.cursor()
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 125, in cursor
    session = self.service.open_session(user, configuration)
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 995, in open_session
    resp = self._rpc('OpenSession', req)
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 923, in _rpc
    response = self._execute(func_name, request)
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 940, in _execute
    return func(request)
  File "/usr/lib/python2.6/site-packages/impala/_thrift_gen/TCLIService/TCLIService.py", line 175, in OpenSession
    return self.recv_OpenSession()
  File "/usr/lib/python2.6/site-packages/impala/_thrift_gen/TCLIService/TCLIService.py", line 193, in recv_OpenSession
    result.read(self._iprot)
  File "/usr/lib/python2.6/site-packages/impala/_thrift_gen/TCLIService/TCLIService.py", line 1109, in read
    fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
AttributeError: 'TBufferedTransport' object has no attribute 'trans'
```
![screenshot](https://img-ask.csdn.net/upload/201704/26/1493177392_630187.png)
Python: cursor = conn.cursor() throws an error
Executing cursor = conn.cursor() throws the error below ![screenshot](https://img-ask.csdn.net/upload/201704/26/1493196649_619205.png)
```
Traceback (most recent call last):
  File "mid_tables.py", line 17, in <module>
    cursor= conn.cursor()
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 125, in cursor
    session = self.service.open_session(user, configuration)
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 995, in open_session
    resp = self._rpc('OpenSession', req)
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 923, in _rpc
    response = self._execute(func_name, request)
  File "/usr/lib/python2.6/site-packages/impala/hiveserver2.py", line 940, in _execute
    return func(request)
  File "/usr/lib/python2.6/site-packages/impala/_thrift_gen/TCLIService/TCLIService.py", line 175, in OpenSession
    return self.recv_OpenSession()
  File "/usr/lib/python2.6/site-packages/impala/_thrift_gen/TCLIService/TCLIService.py", line 193, in recv_OpenSession
    result.read(self._iprot)
  File "/usr/lib/python2.6/site-packages/impala/_thrift_gen/TCLIService/TCLIService.py", line 1109, in read
    fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
AttributeError: 'TBufferedTransport' object has no attribute 'trans'
```
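The same AttributeError appears in both questions above, and it is commonly reported as a thrift-package incompatibility with that era of impyla on Python 2.6/2.7. A frequently cited workaround is pinning an older thrift stack — the exact versions below are an assumption taken from those reports, not something verified against this environment:

```
pip install thrift==0.9.3 thrift_sasl==0.2.1
```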
How to find and kill Hive queries that are currently executing
Hive wraps Hadoop MapReduce jobs, and Hive queries can be submitted through the JDBC API; a single query may compile into one or more MapReduce jobs. How can Hive queries be monitored from outside, and killed when they run too long?
(1) When a SQL statement is executed through JDBC, how many MR jobs does it turn into, what are their JobIds, and how do you maintain the mapping between the statement and its MR jobs?
(2) How do you get the running state of those MR jobs — through JobClient?
(3) How do you kill a Hive query, together with the MapReduce jobs it compiled into?
To add: queries are submitted through a remote Java API, and finding and killing them must also be done in code; manually watching a UI or the MR job console is not an option. The questions are, first, whether there is an official API for this, and second, whether there is a way to talk to HiveServer to get information about submitted queries. (A sketch addressing (2) and (3) follows below.)
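On (2) and (3): there is no single official Hive API for this mapping, but the old org.apache.hadoop.mapred client API can enumerate and kill jobs in code, and Hive embeds a fragment of the query text in the MR job name, which is the usual (imperfect) way to answer (1). A rough sketch, assuming the cluster configuration files are on the classpath:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;

public class ListAndKillJobs {
    public static void main(String[] args) throws Exception {
        // picks up the cluster settings (fs.defaultFS etc.) from the classpath
        JobClient client = new JobClient(new JobConf(new Configuration()));
        for (JobStatus status : client.getAllJobs()) {
            RunningJob job = client.getJob(status.getJobID());
            if (job == null || job.isComplete()) continue;
            // Hive sets the job name from the query text, which is the usual
            // (imperfect) way to map a statement back to its MR jobs
            System.out.println(job.getID() + "\t" + job.getJobName());
            // job.killJob();  // uncomment to kill, e.g. past a runtime threshold
        }
    }
}
```

Note that killing the MR jobs does not cancel the Hive statement itself; the JDBC side would still need to call Statement.cancel() on the long-running statement.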
Hive beeline connection: User: root is not allowed to impersonate root
beeline cannot connect. This has been bothering me for half a month; please point me in the right direction. My Hadoop deployment is single-node. Hive itself can run queries and create databases. Using !connect jdbc:hive2://devcrm:10000 hits a permission problem:
```
beeline> !connect jdbc:hive2://devcrm:10000
Connecting to jdbc:hive2://devcrm:10000
Enter username for jdbc:hive2://devcrm:10000: hadoop
Enter password for jdbc:hive2://devcrm:10000: ******
19/04/23 15:36:53 [main]: WARN jdbc.HiveConnection: Failed to connect to devcrm:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://devcrm:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop (state=08S01,code=0)
```
Connecting with beeline -u jdbc:hive2//devcrm:10000 -n hadoop does not work either:
```
[root@devcrm hadoop]# beeline -u jdbc:hive2//devcrm:10000 -n hadoop
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/kafka/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/kafka/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
scan complete in 1ms
scan complete in 963ms
No known driver to handle "jdbc:hive2//devcrm:10000"
Beeline version 2.3.0 by Apache Hive
```
hive-site.xml:
```
<configuration>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.12.77:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>hive.server2.thrift.client.user</name>
  <value>hadoop</value>
  <description>Username to use against thrift client</description>
</property>
<property>
  <name>hive.server2.thrift.client.password</name>
  <value>hadoop</value>
  <description>Password to use against thrift client</description>
</property>
```
core-site.xml:
```
<configuration>
<!-- namenode address -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.11.207:9000</value>
</property>
<!-- directory for files generated while running hadoop -->
<property>
  <name>hadoop.tmp.dir</name>
  <!--<value>file:/usr/local/kafka/hadoop-2.7.6/tmp</value>-->
  <value>file:/home/hadoop/temp</value>
</property>
<!-- maximum interval for checkpoint backups of the log -->
<!--
  <name>fs.checkpoint.period</name>
  <value>3600</value>
-->
<!-- hadoop proxy user settings -->
<property>
  <!-- the proxy user root may reach the hdfs cluster from any host -->
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <!-- groups the proxy user may impersonate -->
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
</configuration>
```
hdfs-site.xml:
```
<configuration>
<!-- number of replicas hdfs keeps -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<!-- namenode storage location -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/kafka/hadoop-2.7.6/tmp/dfs/name</value>
</property>
<!-- datanode storage location -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/kafka/hadoop-2.7.6/tmp/dfs/data</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>192.168.11.207:50090</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<!-- enable webhdfs -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
</configuration>
```
The page at http://192.168.11.207:10002/ shows HiveServer2's start time ![screenshot](https://img-ask.csdn.net/upload/201904/23/1556005658_291513.png)
Hive log:
```
2019-04-24T09:20:11,829 INFO [main] http.HttpServer: Started HttpServer[hiveserver2] on port 10002
2019-04-24T09:20:50,464 INFO [HiveServer2-Handler-Pool: Thread-38] thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964
2019-04-24T09:20:50,494 INFO [b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964
2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964
2019-04-24T09:20:50,495 INFO [b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38] conf.HiveConf: Using the default value passed in for log id: b0f59ac1-d17a-404f-8bf5-fbe4693c9964
2019-04-24T09:20:50,494 WARN [HiveServer2-Handler-Pool: Thread-38] service.CompositeService: Failed to open session java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?]
at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:362) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:193) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:440) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1377) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1362) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-2.3.0.jar:2.3.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_80] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_80] at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80] Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:606) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] ... 21 more Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate hadoop at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776) ~[hadoop-hdfs-2.7.6.jar:?] 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:704) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:650) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] ... 
21 more 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Updating thread name to b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,494 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Updating thread name to b0f59ac1-d17a-404f-8bf5-fbe4693c9964 HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,495 INFO [HiveServer2-Handler-Pool: Thread-38] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-38 2019-04-24T09:20:50,509 WARN [HiveServer2-Handler-Pool: Thread-38] thrift.ThriftCLIService: Error opening session: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:419) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:362) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:193) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:440) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:322) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1377) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.rpc.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1362) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-2.3.0.jar:2.3.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_80] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_80] at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80] Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] 
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] ... 13 more Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate hadoop at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:606) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] ... 13 more Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate hadoop at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776) ~[hadoop-hdfs-2.7.6.jar:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.6.jar:?] at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source) ~[?:?] at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117) ~[hadoop-hdfs-2.7.6.jar:?] 
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[hadoop-hdfs-2.7.6.jar:?] at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425) ~[hadoop-common-2.7.6.jar:?] at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:704) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:650) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) ~[hive-exec-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionImpl.open(HiveSessionImpl.java:164) ~[hive-service-2.3.0.jar:2.3.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_80] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_80] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_80] at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_80] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-2.3.0.jar:2.3.0] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-2.3.0.jar:2.3.0] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_80] at javax.security.auth.Subject.doAs(Subject.java:415) ~[?:1.7.0_80] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758) ~[hadoop-common-2.7.6.jar:?] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-2.3.0.jar:2.3.0] at com.sun.proxy.$Proxy36.open(Unknown Source) ~[?:?] at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:410) ~[hive-service-2.3.0.jar:2.3.0] ... 13 more ```
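Two things stand out here. First, the second attempt used `jdbc:hive2//devcrm:10000` — the missing colon after `hive2` is exactly why beeline answered "No known driver to handle". Second, the impersonation error means HiveServer2 (running as root) is not allowed to proxy the login user hadoop; the hadoop.proxyuser.root.* entries in core-site.xml look right, but they only take effect after the HDFS/YARN daemons are restarted. A quick way to sidestep impersonation while testing (assuming the restart has been done):

```
# note the colon after hive2; logging in as root avoids the proxy check entirely
beeline -u "jdbc:hive2://devcrm:10000" -n root
```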
JDBC connection to Hive times out
HiveServer2 is started and its log looks normal, but connecting with Kettle, or from my own Java code over JDBC, fails with the log below:
```
java.sql.SQLException: Could not open connection to jdbc:hive2://192.168.162.129:10000/hivedb: java.net.ConnectException: Connection timed out: connect
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:206)
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
	at java.sql.DriverManager.getConnection(DriverManager.java:582)
	at java.sql.DriverManager.getConnection(DriverManager.java:185)
	at com.ljq.hive.HiveJdbcClient.run(HiveJdbcClient.java:21)
	at com.ljq.hive.HiveJdbcClient.main(HiveJdbcClient.java:46)
Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection timed out: connect
	at org.apache.thrift.transport.TSocket.open(TSocket.java:185)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:248)
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
	... 6 more
Caused by: java.net.ConnectException: Connection timed out: connect
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
	at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
	at java.net.Socket.connect(Socket.java:529)
	at org.apache.thrift.transport.TSocket.open(TSocket.java:180)
	... 9 more
error
```
I really don't know what else to try.
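"Connection timed out" is raised at the TCP layer, before any Hive handshake, so it is usually reachability: a firewall on the server, or HiveServer2 bound to the wrong interface. Quick checks, using the IP and port from the stack trace:

```
# from the client: can we reach the port at all?
telnet 192.168.162.129 10000
# on the server: is HS2 listening on 0.0.0.0 (or the LAN IP), not just 127.0.0.1?
netstat -lnt | grep 10000
```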
JDBC access to Impala: the driver errors out when opening a session — how to solve it?
```
java.sql.SQLException: [Simba][ImpalaJDBCDriver](500151) Error setting/closing session: {0}.
	at com.cloudera.hivecommon.api.HS2Client.openSession(Unknown Source)
	at com.cloudera.hivecommon.api.HS2Client.<init>(Unknown Source)
	at com.cloudera.hivecommon.api.HiveServer2ClientFactory.createClient(Unknown Source)
	at com.cloudera.hivecommon.core.HiveJDBCCommonConnection.connect(Unknown Source)
	at com.cloudera.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
	at com.cloudera.jdbc.common.AbstractDriver.connect(Unknown Source)
	at java.sql.DriverManager.getConnection(Unknown Source)
	at java.sql.DriverManager.getConnection(Unknown Source)
Caused by: com.cloudera.support.exceptions.GeneralException: [Simba][ImpalaJDBCDriver](500151) Error setting/closing session: {0}.
	... 8 more
Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'OpenSession'
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
	at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:159)
	at com.cloudera.hivecommon.api.HS2ClientWrapper.recv_OpenSession(Unknown Source)
	at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:146)
	at com.cloudera.hivecommon.api.HS2ClientWrapper.OpenSession(Unknown Source)
	at com.cloudera.hivecommon.api.HS2Client.openSession(Unknown Source)
	at com.cloudera.hivecommon.api.HS2Client.<init>(Unknown Source)
	at com.cloudera.hivecommon.api.HiveServer2ClientFactory.createClient(Unknown Source)
	at com.cloudera.hivecommon.core.HiveJDBCCommonConnection.connect(Unknown Source)
	at com.cloudera.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
	at com.cloudera.jdbc.common.AbstractDriver.connect(Unknown Source)
	at java.sql.DriverManager.getConnection(Unknown Source)
	at java.sql.DriverManager.getConnection(Unknown Source)
	at com.impala.test.Test.main(Test.java:23)
```
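"Invalid method name: 'OpenSession'" means the TCP connection succeeded but whatever answered does not speak the HiveServer2 protocol — with Impala that is typically the old beeswax port 21000 instead of the HiveServer2-compatible port 21050. For reference, a Cloudera/Simba JDBC URL usually targets 21050 (the host is a placeholder):

```
jdbc:impala://impala-host:21050
```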
Hive configured with Oracle as the metastore fails with ORA-01754
With the metastore configured in remote mode on Oracle, Hive starts normally but fails when creating a table:
```
hive> create table dht_tab(name1 int,name45 varchar(50))row format delimited fields terminated by '\t';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : ORA-01754: a table may contain only one column of type LONG
java.sql.SQLSyntaxErrorException: ORA-01754: a table may contain only one column of type LONG
```
Following advice found online, I modified the package.jdo file inside "hive/lib/hive-metastore-1.2.1.jar", changing the LONGVARCHAR type to CLOB, like this:
```
cd $HIVE_HOME/lib
mkdir temp
cp hive-metastore-1.2.1.jar temp
cd temp
jar -xvf hive-metastore-1.2.1.jar
sed -i -e 's/LONGVARCHAR/CLOB/g' package.jdo
jar cfm hive-metastore-1.2.1.jar META-INF/MANIFEST.MF *
cp hive-metastore-1.2.1.jar $HIVE_HOME/lib
```
Then I re-initialized with hive --service metastore, but creating the table in Hive still fails with the same error. Any ideas? One more problem: hive --service hiveserver2 gives no response at all. ![screenshot](https://img-ask.csdn.net/upload/201604/19/1461049192_64211.png) Thanks!
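Before digging further it is worth confirming that the jar Hive actually loads really contains the edit, since the repack ran `jar cfm ... *` in a directory that still held a copy of the original jar; a quick check, using the path from the question:

```
# a non-zero count means the CLOB edit made it into the jar on Hive's classpath
unzip -p $HIVE_HOME/lib/hive-metastore-1.2.1.jar package.jdo | grep -c CLOB
```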
A Hive query with a condition hangs right after the job is launched, with no error and nothing wrong in the logs — what could be the cause?
```
Starting HiveServer2
Hive history file=/tmp/hive/hive_job_log_93d99855-d258-4c9e-b0ad-a1fad756d589_377643780.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_1393569994072_0007, Tracking URL = http://hnbc-c:8088/proxy/application_1393569994072_0007/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1393569994072_0007
```
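Output that stops right after "Starting Job" usually means the job was submitted but is not getting containers, not that Hive failed; the Tracking URL in the output shows the application state, and the same information is available from the CLI (the application id below is taken from the job id in the question):

```
yarn application -list                                   # ACCEPTED = still waiting for resources
yarn application -status application_1393569994072_0007
```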
Hive JDBC connection fails
Connecting to Hive over JDBC fails; the server side logs the following error:
```
[HiveServer2-Handler-Pool: Thread-28]: server.TThreadPoolServer (TThreadPoolServer.java:run(253)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
	at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:227)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
	at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:230)
	at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
	at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:262)
	at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
	at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
	... 4 more
```
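"Invalid status -128" during the SASL handshake usually means the two sides disagree on the transport: mismatched hive-jdbc client and server versions, or a NOSASL server being talked to by a SASL client. For reference, if hive.server2.authentication is set to NOSASL the JDBC URL must say so too (host and database are placeholders):

```
jdbc:hive2://host:10000/default;auth=noSasl
```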
Hive with a MySQL metastore errors: could not retrieve read-only status from the server
I have set up a Hadoop environment with Hive as the data warehouse and MySQL as Hive's metastore, used for scheduled analysis of log files from user data. While Hive talks to MySQL, it intermittently throws the following error:
```
java.sql.SQLException: Query returned non-zero code: 1, cause: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: Could not retrieve transation read-only status server at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:784) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98) at com.sun.proxy.$Proxy0.createTable(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1374) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1407) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102) at com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:1884) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:96) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:607) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:595) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90) at com.sun.proxy.$Proxy11.createTable(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:670) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3959) at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:295) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994) at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:197) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at
org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) NestedThrowablesStackTrace: java.sql.SQLException: Could not retrieve transation read-only status server at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1094) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:997) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:983) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:928) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:949) at com.mysql.jdbc.ConnectionImpl.isReadOnly(ConnectionImpl.java:3967) at com.mysql.jdbc.ConnectionImpl.isReadOnly(ConnectionImpl.java:3938) at com.jolbox.bonecp.ConnectionHandle.isReadOnly(ConnectionHandle.java:867) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:422) at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:270) at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162) at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197) at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105) at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005) at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386) at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827) at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571) at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513) at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232) at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414) at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218) at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065) at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913) at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:784) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98) at com.sun.proxy.$Proxy0.createTable(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1374) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1407) at 
sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102) at com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:1884) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:96) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:607) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:595) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90) at com.sun.proxy.$Proxy11.createTable(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:670) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3959) at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:295) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994) at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:197) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644) at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
```
Searching online suggested this is a MySQL driver jar problem, but after updating the MySQL driver the error still occurs. Sometimes execution succeeds and sometimes it fails. Has anyone run into something similar?
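The underlying "Communications link failure" means the pooled metastore connection to MySQL was dropped (often by the server's wait_timeout), which fits the intermittent pattern. One commonly suggested stopgap — an assumption, not a verified fix for this setup — is letting the connector reconnect, in the metastore's hive-site.xml (host is a placeholder; note the `&amp;` required in XML):

```
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://host:3306/hive?createDatabaseIfNotExist=true&amp;autoReconnect=true</value>
</property>
```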
Connecting to Hive remotely over JDBC, the log shows the following error (beginner here, waiting online)
```
2016-11-17 09:03:01,939 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-38]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
	at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
	at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
	at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
	at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
	at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
	at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
	... 4 more
```