Connecting to Hive through beeline fails with a permission error. The connection cannot be opened, and none of the many posts I read solved the problem.
beeline> !connect jdbc:hive2://devcrm:10000/default
Connecting to jdbc:hive2://devcrm:10000/default
Enter username for jdbc:hive2://devcrm:10000/default: root
Enter password for jdbc:hive2://devcrm:10000/default: ****
19/04/22 17:25:31 [main]: WARN jdbc.HiveConnection: Failed to connect to devcrm:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://devcrm:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)
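This error comes from Hadoop's impersonation (proxy user) check: hive.server2.enable.doAs defaults to true, so HiveServer2 submits each query to HDFS as the user who logged in over JDBC, and Hadoop rejects that unless the user HiveServer2 itself runs as (here root) is registered as a proxy user in core-site.xml. The proxy-user fix is shown below; if impersonation is not needed, a minimal alternative (a sketch, assuming you can edit hive-site.xml and restart HiveServer2) is to turn doAs off:

<property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
    <!-- Run queries as the HiveServer2 process user instead of the
         connected client user, bypassing the proxy-user check -->
</property>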
Hive's hive-site.xml configuration:
<property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
    <description>
      Expects one of [nosasl, none, ldap, kerberos, pam, custom].
      Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
                (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL: Raw transport
    </description>
</property>
<property>
    <name>hive.server2.thrift.client.user</name>
    <value>root</value>
    <description>Username to use against thrift client</description>
</property>
<property>
    <name>hive.server2.thrift.client.password</name>
    <value>root</value>
    <description>Password to use against thrift client</description>
</property>
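With hive.server2.authentication set to NONE, the same connection can also be made in one step from the shell; -u, -n, and -p are standard beeline options, and the credentials here simply mirror the thrift client user/password configured above:

beeline -u jdbc:hive2://devcrm:10000/default -n root -p root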
Hadoop's core-site.xml configuration:
<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.11.207:9000</value>
    </property>
    <!-- Directory where Hadoop stores the files it generates -->
    <property>
        <name>hadoop.tmp.dir</name>
        <!--<value>file:/usr/local/kafka/hadoop-2.7.6/tmp</value>-->
        <value>file:/home/hadoop/temp</value>
    </property>
    <!-- Maximum interval between checkpoint backups (commented out)
    <property>
        <name>fs.checkpoint.period</name>
        <value>3600</value>
    </property>
    -->
    <!-- Configure root as a Hadoop proxy user -->
    <property>
        <!-- Groups whose members the proxy user root may impersonate; * means any group -->
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <!-- Hosts from which the proxy user root may impersonate; * means any host -->
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
</configuration>
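After editing core-site.xml, the new proxy-user settings have to be loaded. Either restart the cluster, or, on a running cluster, push them to the daemons without a restart using the standard Hadoop admin commands (a sketch, assuming you run them as the HDFS admin user on the NameNode host):

hdfs dfsadmin -refreshSuperUserGroupsAndMappings
yarn rmadmin -refreshSuperUserGroupsAndMappings

Then restart HiveServer2; reconnecting with !connect jdbc:hive2://devcrm:10000/default as root should now succeed.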