2021-12-03 16:03:50,947 ERROR [org.apache.spark.executor.Executor] - Exception in task 1.0 in stage 2.0 (TID 5)
java.lang.ArrayIndexOutOfBoundsException: 6
at contest3.demo_02$.$anonfun$main$2(demo_02.scala:19)
at contest3.demo_02$.$anonfun$main$2$adapted(demo_02.scala:17)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-12-03 16:03:50,948 INFO [org.apache.spark.scheduler.TaskSetManager] - Finished task 0.0 in stage 2.0 (TID 4) in 31 ms on hadoop222 (executor driver) (1/2)
2021-12-03 16:03:50,995 WARN [org.apache.spark.scheduler.TaskSetManager] - Lost task 1.0 in stage 2.0 (TID 5, hadoop222, executor driver): java.lang.ArrayIndexOutOfBoundsException: 6
at contest3.demo_02$.$anonfun$main$2(demo_02.scala:19)
at contest3.demo_02$.$anonfun$main$2$adapted(demo_02.scala:17)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-12-03 16:03:50,997 ERROR [org.apache.spark.scheduler.TaskSetManager] - Task 1 in stage 2.0 failed 1 times; aborting job
2021-12-03 16:03:50,998 INFO [org.apache.spark.scheduler.TaskSchedulerImpl] - Removed TaskSet 2.0, whose tasks have all completed, from pool
2021-12-03 16:03:51,001 INFO [org.apache.spark.scheduler.TaskSchedulerImpl] - Cancelling stage 2
2021-12-03 16:03:51,002 INFO [org.apache.spark.scheduler.TaskSchedulerImpl] - Killing all running tasks in stage 2: Stage cancelled
2021-12-03 16:03:51,003 INFO [org.apache.spark.scheduler.DAGScheduler] - ResultStage 2 (count at demo_02.scala:30) failed in 0.098 s due to Job aborted due to stage failure: Task 1 in stage 2.0 failed 1 times, most recent failure: Lost task 1.0 in stage 2.0 (TID 5, hadoop222, executor driver): java.lang.ArrayIndexOutOfBoundsException: 6
at contest3.demo_02$.$anonfun$main$2(demo_02.scala:19)
at contest3.demo_02$.$anonfun$main$2$adapted(demo_02.scala:17)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
2021-12-03 16:03:51,005 INFO [org.apache.spark.scheduler.DAGScheduler] - Job 2 failed: count at demo_02.scala:30, took 0.106103 s
2021-12-03 16:03:51,011 INFO [org.apache.spark.SparkContext] - Invoking stop() from shutdown hook
2021-12-03 16:03:51,018 INFO [org.sparkproject.jetty.server.AbstractConnector] - Stopped Spark@49f5c307{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 2.0 failed 1 times, most recent failure: Lost task 1.0 in stage 2.0 (TID 5, hadoop222, executor driver): java.lang.ArrayIndexOutOfBoundsException: 6
at contest3.demo_02$.$anonfun$main$2(demo_02.scala:19)
at contest3.demo_02$.$anonfun$main$2$adapted(demo_02.scala:17)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2023)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1972)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1971)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1971)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:950)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:950)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:950)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2203)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2152)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2141)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:752)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2093)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2133)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2158)
at org.apache.spark.rdd.RDD.count(RDD.scala:1227)
at contest3.demo_02$.main(demo_02.scala:30)
at contest3.demo_02.main(demo_02.scala)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 6
at contest3.demo_02$.$anonfun$main$2(demo_02.scala:19)
at contest3.demo_02$.$anonfun$main$2$adapted(demo_02.scala:17)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:513)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-12-03 16:03:51,020 INFO [org.apache.spark.ui.SparkUI] - Stopped Spark web UI at http://hadoop222:4040
2021-12-03 16:03:51,040 INFO [org.apache.spark.MapOutputTrackerMasterEndpoint] - MapOutputTrackerMasterEndpoint stopped!
2021-12-03 16:03:51,055 INFO [org.apache.spark.storage.memory.MemoryStore] - MemoryStore cleared
2021-12-03 16:03:51,056 INFO [org.apache.spark.storage.BlockManager] - BlockManager stopped
2021-12-03 16:03:51,064 INFO [org.apache.spark.storage.BlockManagerMaster] - BlockManagerMaster stopped
2021-12-03 16:03:51,067 INFO [org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint] - OutputCommitCoordinator stopped!
2021-12-03 16:03:51,074 INFO [org.apache.spark.SparkContext] - Successfully stopped SparkContext
2021-12-03 16:03:51,074 INFO [org.apache.spark.util.ShutdownHookManager] - Shutdown hook called
2021-12-03 16:03:51,075 INFO [org.apache.spark.util.ShutdownHookManager] - Deleting directory /tmp/spark-654a40aa-e99b-45cd-bc91-823a5b2711a8
Process finished with exit code 1
Spark program fails at runtime
1 answer

达娃里氏 answered on 2021-12-03 16:23
Read the error message: "java.lang.ArrayIndexOutOfBoundsException". Your Java array index is out of bounds. Say you declare an array of eight elements; its valid indices are 0 through 7. If you then access an element with an index of 8 or more, the program will naturally throw this exception. In your case the message is "ArrayIndexOutOfBoundsException: 6", which means the code at demo_02.scala:19 accessed index 6 of an array that has at most 6 elements (valid indices 0 to 5) — most likely a record that split into fewer fields than you expected.
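The failure mode described above can be reproduced with a minimal sketch in plain Java (the delimiter, field values, and class name here are illustrative assumptions, since the source of demo_02.scala is not shown). It also demonstrates one common cause of "fewer fields than expected": `String.split` drops trailing empty fields by default.

```java
public class SafeFieldAccess {
    public static void main(String[] args) {
        // Trailing empty fields are dropped by String.split(regex),
        // so this line yields only 6 fields (indices 0..5), not 7.
        String[] fields = "a,b,c,d,e,f,".split(",");
        System.out.println(fields.length); // prints 6

        // fields[6] would throw java.lang.ArrayIndexOutOfBoundsException: 6,
        // matching the stack trace above. Guard with a length check instead:
        String seventh = fields.length > 6 ? fields[6] : "<missing>";
        System.out.println(seventh); // prints "<missing>"

        // A negative limit tells split to keep trailing empty fields:
        String[] all = "a,b,c,d,e,f,".split(",", -1);
        System.out.println(all.length); // prints 7
    }
}
```

Inside a Spark `map` over text lines, the same length check (or `split(delim, -1)`) before indexing into the split result avoids crashing the whole stage on one malformed record.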