Yoga_L1n 2017-11-10 07:21 · acceptance rate 14.3%
2,527 views
Closed

Urgent!!!! Hive on Spark configured through CM keeps failing on execution

Desperately hoping someone can lend a hand...

2017-11-10 14:50:13,485 INFO [main]: ql.Driver (SessionState.java:printInfo(986)) - Launching Job 3 out of 5
2017-11-10 14:50:13,485 INFO [main]: ql.Driver (Driver.java:launchTask(1977)) - Starting task [Stage-1:MAPRED] in serial mode
2017-11-10 14:50:13,485 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - In order to change the average load for a reducer (in bytes):
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - set hive.exec.reducers.bytes.per.reducer=
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - In order to limit the maximum number of reducers:
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - set hive.exec.reducers.max=
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - In order to set a constant number of reducers:
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - set mapreduce.job.reduces=
2017-11-10 14:50:13,676 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - Starting Spark Job = d2163859-72ee-4a25-8e27-e566d8ab548c
2017-11-10 14:51:06,982 WARN [RPC-Handler-3]: client.SparkClientImpl (SparkClientImpl.java:rpcClosed(131)) - Client RPC channel closed unexpectedly.
2017-11-10 14:52:06,901 WARN [main]: impl.RemoteSparkJobStatus (RemoteSparkJobStatus.java:getSparkJobInfo(152)) - Failed to get job info.
java.util.concurrent.TimeoutException
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:150)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getState(RemoteSparkJobStatus.java:82)
at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:80)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:109)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2017-11-10 14:52:06,915 ERROR [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(132)) - Failed to monitor Job[ 2] with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.util.concurrent.TimeoutException)'
org.apache.hadoop.hive.ql.metadata.HiveException: java.util.concurrent.TimeoutException
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:153)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getState(RemoteSparkJobStatus.java:82)
at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:80)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:109)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.util.concurrent.TimeoutException
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:150)
... 24 more
2017-11-10 14:52:06,915 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(995)) - Failed to monitor Job[ 2] with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.util.concurrent.TimeoutException)'
org.apache.hadoop.hive.ql.metadata.HiveException: java.util.concurrent.TimeoutException
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:153)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getState(RemoteSparkJobStatus.java:82)
at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:80)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:109)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.util.concurrent.TimeoutException
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:150)
... 24 more

2017-11-10 14:52:06,931 ERROR [main]: ql.Driver (SessionState.java:printError(995)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

1 answer

  • 会吐泡泡的小鱼儿 2018-08-27 09:16

    Make sure Hadoop itself is running properly, and copy the Hadoop configuration files and the Hive configuration files into Spark's configuration directory.
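    The copy step above can be sketched as shell commands; the paths below are assumptions for a typical CDH-style layout (/etc/hadoop/conf, /etc/hive/conf, /etc/spark/conf) and may differ on your cluster:

    ```shell
    # Sketch only -- paths are assumed, adjust to your install.
    # Copy the Hadoop client configs so the remote Spark driver can
    # reach HDFS and YARN:
    cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/spark/conf/
    # Copy the Hive config so Spark jobs can find the metastore:
    cp /etc/hive/conf/hive-site.xml /etc/spark/conf/
    ```

    After copying, restart the Hive service (or at least start a fresh CLI session) so the new configuration is picked up.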
