∀✿●离烟※暮日●✿∀ — 2024-06-09 20:33

How do I fix Hive failing to connect?


hadoop@Master:/usr/local/hive$ ./bin/hive
2024-06-09 20:23:32,511 INFO conf.HiveConf: Found configuration file file:/usr/local/hive/conf/hive-site.xml
2024-06-09 20:23:33,938 WARN common.LogUtils: DEPRECATED: Ignoring hive-default.xml found on the CLASSPATH at /usr/local/hive/conf/hive-default.xml
Hive Session ID = 60091ed9-7b0b-447f-a13e-4fc414bab475
2024-06-09 20:23:34,066 INFO SessionState: Hive Session ID = 60091ed9-7b0b-447f-a13e-4fc414bab475

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
2024-06-09 20:23:34,105 INFO SessionState: 
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
2024-06-09 20:23:34,155 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 14 has reached the threshold 0.9990 of total blocks 14. The minimum number of live datanodes is not required. Name node detected blocks with generation stamps in future. This means that Name node metadata is inconsistent. This can happen if Name node metadata files have been manually replaced. Exiting safe mode will cause loss of 2404 byte(s). Please restart name node with right metadata or use "hdfs dfsadmin -safemode forceExit" if you are certain that the NameNode was started with the correct FsImage and edit logs. If you encountered this during a rollback, it is safe to exit with -safemode forceExit. NamenodeHostName:Master
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1612)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3440)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1167)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:742)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:651)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:328)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:241)
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 14 has reached the threshold 0.9990 of total blocks 14. The minimum number of live datanodes is not required. Name node detected blocks with generation stamps in future. This means that Name node metadata is inconsistent. This can happen if Name node metadata files have been manually replaced. Exiting safe mode will cause loss of 2404 byte(s). Please restart name node with right metadata or use "hdfs dfsadmin -safemode forceExit" if you are certain that the NameNode was started with the correct FsImage and edit logs. If you encountered this during a rollback, it is safe to exit with -safemode forceExit. NamenodeHostName:Master
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1612)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3440)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1167)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:742)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2509)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2483)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1498)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1495)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1512)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1487)
    at org.apache.hadoop.hive.ql.exec.Utilities.createDirsWithPermission(Utilities.java:3668)
    at org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:4487)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:760)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:701)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627)
    ... 9 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 14 has reached the threshold 0.9990 of total blocks 14. The minimum number of live datanodes is not required. Name node detected blocks with generation stamps in future. This means that Name node metadata is inconsistent. This can happen if Name node metadata files have been manually replaced. Exiting safe mode will cause loss of 2404 byte(s). Please restart name node with right metadata or use "hdfs dfsadmin -safemode forceExit" if you are certain that the NameNode was started with the correct FsImage and edit logs. If you encountered this during a rollback, it is safe to exit with -safemode forceExit. NamenodeHostName:Master
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1612)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3440)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1167)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:742)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1567)
    at org.apache.hadoop.ipc.Client.call(Client.java:1513)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
    at com.sun.proxy.$Proxy28.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:675)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
    at com.sun.proxy.$Proxy29.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2507)
    ... 20 more


2 answers

  • 专家-赤兔 [online] · Quality creator in the full-stack domain · 2024-06-09 20:33

    Quoted from the free WeChat mini-program: 皆我百晓生

    From the log you posted, Hive hit a problem while talking to HDFS: the NameNode is in safe mode, which prevents Hive from creating the directory /tmp/hive that it needs at startup. Safe mode is a protective mechanism that blocks writes to the filesystem during cluster startup until all DataNodes have reported their block status, so that data stays consistent. Note that your log additionally reports blocks with generation stamps in the future, meaning the NameNode metadata is inconsistent, so the NameNode will not leave safe mode on its own even though the block threshold has been reached.

    To resolve this, work through the following steps:

    1. Confirm the NameNode's state

    First, confirm that the NameNode really is in safe mode. You can check with:

    hdfs dfsadmin -report
    

    This command prints the overall state of HDFS, including whether the NameNode is in safe mode.
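
    If you only want the safe-mode flag itself, the standard hdfs dfsadmin tool also has a get subcommand; while this problem persists it should print "Safe mode is ON":

    hdfs dfsadmin -safemode get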

    2. Force the NameNode out of safe mode

    If the NameNode is indeed in safe mode, and you are sure that all DataNodes are online and have reported their block status, you can tell it to leave safe mode:

    hdfs dfsadmin -safemode leave
    

    Alternatively, if you need to force an exit without the consistency checks, run the command below. In your case the error message itself recommends forceExit, because the inconsistent metadata may keep a plain leave from taking effect; it also warns that exiting will discard 2404 bytes, so use this with caution:

    hdfs dfsadmin -safemode forceExit
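
    Relatedly, when you just want a script to block until safe mode ends by itself (it will not end by itself here, given the metadata inconsistency, but this is handy on normal startups), the standard wait subcommand exists:

    hdfs dfsadmin -safemode wait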
    

    3. Check the Hadoop configuration

    Make sure the Hadoop configuration is correct. Review the settings in hdfs-site.xml and core-site.xml, in particular whether parameters such as fs.defaultFS, dfs.replication, and dfs.namenode.name.dir are configured properly; a sketch of the first of these follows.
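
    As a minimal illustration (the host name Master comes from your log; port 9000 is a common choice and an assumption here, so keep whatever your cluster already uses):

    <!-- core-site.xml: the HDFS entry point every client, including Hive, connects to -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://Master:9000</value>
    </property>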

    4. Restart the Hadoop services

    After the steps above, you may need to restart the Hadoop services for all the changes to take effect. Typically that means stopping and restarting the Hadoop daemons, for example:

    stop-dfs.sh
    start-dfs.sh
    

    And for a YARN cluster:

    stop-yarn.sh
    start-yarn.sh
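
    After restarting, the JDK's jps tool gives a quick sanity check that the daemons came back; on the Master node from your log you would expect to see at least a NameNode process (the exact set depends on how your cluster is laid out):

    jps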
    

    5. Retry the Hive connection

    Once the Hadoop services are running normally, start the Hive shell again and see whether the same error recurs.

    6. Check the Hive configuration

    If the problem persists, check Hive's configuration file hive-site.xml: make sure Hive can locate the Hadoop configuration, and that parameters such as hive.metastore.uris and hive.metastore.warehouse.dir are set correctly, roughly as sketched below.
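
    A rough illustration only; the values are assumptions, not read from your cluster, so adjust host names and paths to match your setup:

    <!-- hive-site.xml: where the metastore listens (9083 is the conventional port) -->
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://Master:9083</value>
    </property>
    <!-- hive-site.xml: where managed table data lives (this is Hive's default) -->
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/user/hive/warehouse</value>
    </property>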

    7. Clean up the temporary directories

    Sometimes the HDFS temporary directories (such as /tmp/hive) get into a bad state. In that case, remove them by hand, recreate them with suitable permissions, and then restart the Hadoop and Hive services; see the sketch below.
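
    A minimal sketch of that cleanup, assuming the NameNode has already left safe mode (HDFS rejects writes while it is on) and that world-writable permissions are acceptable on your cluster:

    hdfs dfs -rm -r /tmp/hive        # remove the broken scratch directory
    hdfs dfs -mkdir -p /tmp/hive     # recreate it
    hdfs dfs -chmod 777 /tmp/hive    # let any user's Hive session write to it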

    The steps above should get you past the safe-mode error when Hive connects to Hadoop. If the problem still remains, dig deeper into the Hadoop and Hive logs for other errors or warnings; in your case, pay particular attention to the "generation stamps in future" message, which points at manually replaced or otherwise inconsistent NameNode metadata.

