H EL 2023-02-09 09:21

Spark: union of DataFrames then insert into Doris fails with spark.driver.maxResultSize exceeded

I take DataFrames produced from Spark SQL (note: the Spark SQL itself already contains multiple UNION ALL operations), union them with the DataFrame API, and then insert the result into Doris. The job fails with a spark.driver.maxResultSize error. Could someone experienced explain the cause of this problem and how to optimize it? Thanks.


  import org.apache.spark.sql.DataFrame
  import org.apache.spark.storage.StorageLevel

  // Union the seven inputs, shuffle into 200 partitions, and cache on memory and disk.
  private def processRepartition(df1: DataFrame, df2: DataFrame, df3: DataFrame, df4: DataFrame,
                                 df5: DataFrame, df6: DataFrame, df7: DataFrame): DataFrame = {
    df1.union(df2).union(df3).union(df4).union(df5).union(df6).union(df7)
      .repartition(200)
      .persist(StorageLevel.MEMORY_AND_DISK)
  }
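
union keeps every partition of every input, so the stage that feeds the repartition(200) shuffle runs one task per input partition — roughly 40,000 tasks in the log below — and each finished task ships a serialized result (status plus accumulator updates) back to the driver. One way to shrink that task count is to coalesce each input before the union; coalesce is a narrow transformation, so it adds no extra shuffle. A minimal sketch (the helper name and the partition count 64 are illustrative assumptions, not from the original code):

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.storage.StorageLevel

  // Hypothetical variant: merge each input's partitions before the union so the
  // pre-shuffle stage runs far fewer tasks and returns far fewer task results.
  private def processRepartitionCoalesced(dfs: Seq[DataFrame]): DataFrame = {
    dfs.map(_.coalesce(64)) // 64 partitions per input is an illustrative choice
      .reduce(_ union _)
      .repartition(200)
      .persist(StorageLevel.MEMORY_AND_DISK)
  }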

23/02/09 09:12:13 INFO cluster.YarnScheduler: Removed TaskSet 92.0, whose tasks have all completed, from pool
23/02/09 09:12:13 INFO scheduler.TaskSetManager: Finished task 15400.0 in stage 97.0 (TID 120846) in 63 ms on worker40.center.testname (executor 6) (28474/40000)
23/02/09 09:12:13 INFO scheduler.DAGScheduler: Job 20 failed: foreachPartition at DorisSourceProvider.scala:68, took 262.238821 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 39091 tasks (3.0 GB) is bigger than spark.driver.maxResultSize (3.0 GB)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1890)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1877)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:929)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:929)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:929)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2111)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2060)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2049)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:740)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2081)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2102)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2121)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2146)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:935)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:933)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
        at org.apache.doris.spark.sql.DorisSourceProvider.createRelation(DorisSourceProvider.scala:68)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
        at com.testname.common.util.DataBaseUtil$.insertIntoDorisBySpark(DataBaseUtil.scala:63)
        at com.testname.dmk.sales.cont.TableContractAnalyseBuild$.main(TableContractAnalyseBuild.scala:164)
        at com.testname.dmk.sales.cont.TableContractAnalyseBuild.main(TableContractAnalyseBuild.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/02/09 09:12:13 INFO scheduler.TaskSetManager: Finished task 13313.0 in stage 110.0 (TID 120836) in 100 ms on worker07.center.testname (executor 4) (13718/40000)
23/02/09 09:12:13 INFO scheduler.TaskSetManager: Finished task 21.0 in stage 145.0 (TID 120859) in 55 ms on worker55.center.testname (executor 5) (127/200)
23/02/09 09:12:13 INFO scheduler.TaskSetManager: Finished task 15401.0 in stage 97.0 (TID 120853) in 65 ms on worker40.center.testname (executor 6) (28475/40000)
23/02/09 09:12:13 INFO cluster.YarnScheduler: Removed TaskSet 97.0, whose tasks have all completed, from pool
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 33566.0 in stage 97.0 (TID 120849, worker36.center.testname, executor 1): TaskKilled (Stage cancelled)
23/02/09 09:12:13 INFO cluster.YarnScheduler: Removed TaskSet 97.0, whose tasks have all completed, from pool
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 13321.0 in stage 110.0 (TID 120864, worker40.center.testname, executor 6): TaskKilled (Stage cancelled)
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 13322.0 in stage 110.0 (TID 120867, worker40.center.testname, executor 6): TaskKilled (Stage cancelled)
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 15307.0 in stage 97.0 (TID 120843, worker34.center.testname, executor 2): TaskKilled (Stage cancelled)
23/02/09 09:12:13 INFO cluster.YarnScheduler: Removed TaskSet 97.0, whose tasks have all completed, from pool
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 33567.0 in stage 97.0 (TID 120854, worker36.center.testname, executor 1): TaskKilled (Stage cancelled)
23/02/09 09:12:13 INFO cluster.YarnScheduler: Removed TaskSet 97.0, whose tasks have all completed, from pool
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 13324.0 in stage 110.0 (TID 120871, worker34.center.testname, executor 2): TaskKilled (Stage cancelled)
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 13326.0 in stage 110.0 (TID 120873, worker34.center.testname, executor 2): TaskKilled (Stage cancelled)
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 13325.0 in stage 110.0 (TID 120872, worker34.center.testname, executor 2): TaskKilled (Stage cancelled)
23/02/09 09:12:13 WARN scheduler.TaskSetManager: Lost task 13318.0 in stage 110.0 (TID 120847, worker07.center.testname, executor 4): TaskKilled (Stage cancelled)
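
The aborting check is driver-side: spark.driver.maxResultSize caps the total size of serialized results returned by all tasks of a single job, and here the results of 39,091 tasks together reached the 3.0 GB limit, so the DAGScheduler failed the job before the Doris write finished. The quick mitigation is to raise the cap (or set it to 0 to disable the check, at the risk of driver OOM); the durable fix is fewer tasks, as sketched above. A minimal configuration sketch, assuming the session is built in code (the 6g value is an illustrative assumption, not a tuned figure):

  import org.apache.spark.sql.SparkSession

  // Raise the cap on total serialized task results collected by the driver.
  // "0" disables the check entirely but risks driver OutOfMemoryError.
  val spark = SparkSession.builder()
    .appName("TableContractAnalyseBuild")
    .config("spark.driver.maxResultSize", "6g") // illustrative value
    .getOrCreate()

The same setting can also be passed at launch with --conf spark.driver.maxResultSize=6g on spark-submit.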


2 answers

  • 「已注销」 2023-02-09 10:09

    Please post the full error output so I can take a look.
