The Hive startup error you posted shows that Hive cannot connect to port 9000 on localhost ("Connection refused"). Below are detailed troubleshooting steps and possible fixes:
1. Confirm the Hadoop service status
Hive depends on HDFS, and port 9000 in the error message is typically the port the Hadoop NameNode listens on. First, check that Hadoop has started and the NameNode is running:
# Run the following command in a terminal to list the Hadoop JVM processes
$ jps
# If Hadoop is running, you should see output similar to:
# NameNode
# DataNode
# SecondaryNameNode (if enabled)
If no NameNode process appears, start Hadoop:
# Run the start commands for your Hadoop installation
# (hadoop-daemon.sh is deprecated in Hadoop 3.x; there, prefer
#  `hdfs --daemon start namenode` etc., or `start-dfs.sh` to start all HDFS daemons)
$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode
# If a SecondaryNameNode is configured, start it as well
$ hadoop-daemon.sh start secondarynamenode
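Even when `jps` shows a NameNode, it is worth confirming that something is actually accepting connections on port 9000. A minimal sketch using bash's built-in `/dev/tcp` pseudo-device (no extra tools required; the port numbers are only examples):

```shell
#!/usr/bin/env bash
# Report whether a TCP port on a host accepts connections.
# Requires bash (the /dev/tcp redirection is a bash feature, not POSIX sh).
check_port() {
  local host=$1 port=$2
  # Opening /dev/tcp/<host>/<port> attempts a TCP connect; the subshell
  # keeps the file descriptor from leaking into the caller.
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port localhost 9000   # prints "open" once the NameNode is listening
```

If this prints "closed" while `jps` shows a NameNode, the NameNode is likely bound to a different port or interface than the one in `fs.defaultFS`.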
2. Check the Hadoop configuration files
Make sure the fs.defaultFS property in core-site.xml points to the correct NameNode address and port (commonly localhost:9000):
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
Also verify that the settings in hdfs-site.xml are correct, especially anything related to how clients reach the NameNode (e.g. dfs.namenode.rpc-address).
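To see exactly which address Hive will try to connect to, you can pull the fs.defaultFS value out of core-site.xml with grep/sed. This is a rough sketch: it writes a sample file so it runs anywhere, whereas on a real node you would point CORE_SITE at $HADOOP_HOME/etc/hadoop/core-site.xml instead:

```shell
#!/usr/bin/env bash
# Extract the value of fs.defaultFS from a core-site.xml file.
CORE_SITE=$(mktemp)   # on a real node: $HADOOP_HOME/etc/hadoop/core-site.xml
cat > "$CORE_SITE" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# Print the <value> element that follows the fs.defaultFS <name> element
grep -A1 '<name>fs.defaultFS</name>' "$CORE_SITE" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
# -> hdfs://localhost:9000
```

When Hadoop itself is installed, `hdfs getconf -confKey fs.defaultFS` gives the same answer with all configuration layering applied, and is the more reliable check.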
3. Check the firewall settings
Make sure no firewall rule is blocking access to localhost:9000. You can temporarily disable the firewall for testing, or add an allow rule:
# Temporarily stop the firewall (testing only; on systems using firewalld)
$ sudo systemctl stop firewalld
# Or add an allow rule (exact commands vary by distribution; the variant
# below applies to older systems running the iptables service)
$ sudo iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
$ sudo service iptables save
$ sudo service iptables restart
4. Check network interfaces and hostname resolution
Make sure the hostnames referenced in your Hive and Hadoop configuration (such as localhost) resolve correctly and point to this machine's IP address (normally 127.0.0.1). Check the /etc/hosts file:
# /etc/hosts
127.0.0.1 localhost
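You can verify the resolution without editing anything. A small sketch using `getent`, which queries the same resolver libc uses (`ahostsv4` restricts the answer to IPv4, so an IPv6 `::1` entry cannot mask a misconfigured IPv4 mapping):

```shell
#!/usr/bin/env bash
# Show how 'localhost' resolves through the system resolver, IPv4 only.
getent ahostsv4 localhost | head -n1
# The first field of the output should be 127.0.0.1
```

If the first field is anything other than 127.0.0.1, fix /etc/hosts before touching the Hadoop or Hive configuration.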
5. Check the Hive configuration
Make sure the settings in hive-site.xml point to the correct metastore and warehouse locations:
<!-- hive-site.xml (only the settings relevant here are shown) -->
<property>
  <name>hive.metastore.uris</name>
  <!-- Leave unset for an embedded metastore; set for a remote metastore -->
  <!--<value>thrift://localhost:9083</value>-->
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- Embedded Derby database as the metastore backend -->
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <!-- If you use another database (e.g. MySQL), make sure it is running
       and the connection details are correct -->
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.execution.engine</name>
  <value>mr</value>
  <!-- Can be changed to 'tez' or 'spark' if your cluster supports it -->
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>localhost</value>
</property>
<property>
  <name>hive.server2.transport.mode</name>
  <value>binary</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>300</value>
</property>
After correcting the configuration, restart the Hadoop daemons and the Hive metastore/HiveServer2, then retry your Hive command.
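One easy-to-miss problem in a long hive-site.xml is a property defined twice, where the later definition silently wins. A quick sketch that lists duplicated `<name>` entries; it builds a sample file so it is self-contained, whereas on a real node you would point HIVE_SITE at $HIVE_HOME/conf/hive-site.xml:

```shell
#!/usr/bin/env bash
# List any <name> element that appears more than once in hive-site.xml.
HIVE_SITE=$(mktemp)   # on a real node: $HIVE_HOME/conf/hive-site.xml
cat > "$HIVE_SITE" <<'EOF'
<configuration>
  <property><name>hive.metastore.client.socket.timeout</name><value>300</value></property>
  <property><name>hive.server2.thrift.port</name><value>10000</value></property>
  <property><name>hive.metastore.client.socket.timeout</name><value>600</value></property>
</configuration>
EOF

# Pull out every <name>…</name>, sort, and print only the repeats
grep -o '<name>[^<]*</name>' "$HIVE_SITE" | sort | uniq -d
# -> <name>hive.metastore.client.socket.timeout</name>
```

An empty output means every property is defined exactly once; anything printed should be deduplicated, keeping the value you intend.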