There are also a few other WARNs:
15/05/19 11:19:19 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/05/19 11:19:33 INFO AppClient$ClientActor: Connecting to master spark://172.18.219.136:7077...
15/05/19 11:19:34 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:19:49 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:19:53 INFO AppClient$ClientActor: Connecting to master spark://172.18.219.136:7077...
15/05/19 11:20:04 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:20:13 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/19 11:20:13 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/05/19 11:20:13 INFO TaskSchedulerImpl: Cancelling stage 1
15/05/19 11:20:13 INFO DAGScheduler: Failed to run collect at WordCount.scala:31
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
How should I handle this error when running WordCount on Spark? Any pointers would be much appreciated.
5 answers
老李家的小二 2015-05-21 03:13 (accepted answer)
http://taoistwar.gitbooks.io/spark-operationand-maintenance-management/content/spark_relate_software/hadoop_2x_install.html
In spark-env.sh, set export SPARK_MASTER_IP=&lt;hostname or IP of the master node&gt;.
If you use a hostname, check /etc/hosts to make sure that name actually resolves.
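A minimal sketch of the configuration the answer describes, for Spark 1.x standalone mode. The IP 172.18.219.136 is taken from the log above; the hostname "spark-master" is only a hypothetical example, not something from the original post.

    # conf/spark-env.sh on the master node (keep workers consistent)
    export SPARK_MASTER_IP=172.18.219.136    # or the master's hostname

    # If a hostname is used instead of an IP, make sure every node can
    # resolve it, e.g. add a line like this to /etc/hosts:
    #   172.18.219.136   spark-master

After changing spark-env.sh, restart the cluster (sbin/stop-all.sh then sbin/start-all.sh) and open the master web UI (port 8080 by default) to confirm the workers are registered and have free memory; that is exactly what the "Initial job has not accepted any resources" warning asks you to check.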