m0_61386826 2023-01-17 10:39
69 views · closed

Compile errors when building the Atlas Spark connector plugin

The errors when compiling spark-atlas-connector are shown below. I have already tried every fix I could find online, with no luck:

[root@cdh1 spark-atlas-connector-master]# mvn clean scala:compile compile package
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] spark-atlas-connector-main_2.11                                    [pom]
[INFO] spark-atlas-connector_2.11                                         [jar]
[INFO] spark-atlas-connector-assembly                                     [jar]
[INFO] 
[INFO] -------< com.hortonworks.spark:spark-atlas-connector-main_2.11 >--------
[INFO] Building spark-atlas-connector-main_2.11 0.1.0-SNAPSHOT            [1/3]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (clean) @ spark-atlas-connector-main_2.11 ---
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ spark-atlas-connector-main_2.11 ---
[INFO] Deleting /opt/myjar/spark-hook/spark-atlas-connector-master/target
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (default-cli) @ spark-atlas-connector-main_2.11 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (enforce-versions) @ spark-atlas-connector-main_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (default) @ spark-atlas-connector-main_2.11 ---
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (default) @ spark-atlas-connector-main_2.11 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (enforce-versions) @ spark-atlas-connector-main_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (default) @ spark-atlas-connector-main_2.11 ---
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (default) @ spark-atlas-connector-main_2.11 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (pre-test-clean) @ spark-atlas-connector-main_2.11 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-atlas-connector-main_2.11 ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /opt/myjar/spark-hook/spark-atlas-connector-master/target/tmp
[INFO] Executed tasks
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (default) @ spark-atlas-connector-main_2.11 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ spark-atlas-connector-main_2.11 ---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Discovery starting.
Discovery completed in 49 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 167 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ----------< com.hortonworks.spark:spark-atlas-connector_2.11 >----------
[INFO] Building spark-atlas-connector_2.11 0.1.0-SNAPSHOT                 [2/3]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ spark-atlas-connector_2.11 ---
[INFO] Deleting /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/target
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (default-cli) @ spark-atlas-connector_2.11 ---
[WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile
[INFO] Using incremental compilation
[INFO] Compiling 28 Scala sources to /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/target/classes...
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:34: object DataSourceV2ScanExec is not a member of package org.apache.spark.sql.execution.datasources.v2
[ERROR] import org.apache.spark.sql.execution.datasources.v2.{DataSourceV2Relation, DataSourceV2ScanExec, WriteToDataSourceV2Exec}
[ERROR]        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:35: object MicroBatchWriter is not a member of package org.apache.spark.sql.execution.streaming.sources
[ERROR] import org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter
[ERROR]        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:36: object v2 is not a member of package org.apache.spark.sql.sources
[ERROR] import org.apache.spark.sql.sources.v2.writer.DataSourceWriter
[ERROR]                                     ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:144: value tableIdentifier is not a member of org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
[ERROR]         case r: UnresolvedRelation => Seq(prepareEntity(r.tableIdentifier))
[ERROR]                                                           ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:567: not found: type DataSourceWriter
[ERROR]     def unapply(r: DataSourceWriter): Option[SACAtlasEntityWithDependencies] = r match {
[ERROR]                    ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:193: value writer is not a member of org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec
[ERROR]       val outputEntities = node.writer match {
[ERROR]                                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:194: not found: type MicroBatchWriter
[ERROR]         case w: MicroBatchWriter if w.writer.isInstanceOf[SinkDataSourceWriter] =>
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:295: not found: type DataSourceWriter
[ERROR]   private def discoverOutputEntities(writer: DataSourceWriter): Seq[SACAtlasReferenceable] = {
[ERROR]                                              ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:400: not found: type DataSourceWriter
[ERROR]     def unapply(writer: DataSourceWriter): Option[SACAtlasReferenceable] = {
[ERROR]                         ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:430: not found: type DataSourceWriter
[ERROR]     def getHWCEntity(r: DataSourceWriter): Option[SACAtlasReferenceable] = r match {
[ERROR]                         ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:313: value writer is not a member of org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec
[ERROR]       val outputEntities = HWCEntities.getHWCEntity(node.writer) match {
[ERROR]                                                          ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:323: type mismatch;
 found   : Seq[Any]
 required: Seq[com.hortonworks.spark.atlas.SACAtlasReferenceable]
[ERROR]         Seq(internal.updateMLProcessToEntity(inputsEntities, outputEntities, logMap))
[ERROR]                                                              ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:326: type mismatch;
 found   : Seq[Any]
 required: Seq[com.hortonworks.spark.atlas.SACAtlasReferenceable]
[ERROR]         Seq(internal.etlProcessToEntity(inputsEntities, outputEntities, logMap))
[ERROR]                                                         ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:366: not found: type DataSourceWriter
[ERROR]           writer: DataSourceWriter): Option[SACAtlasReferenceable] = writer match {
[ERROR]                   ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:367: not found: type DataSourceWriter
[ERROR]         case w: DataSourceWriter
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:370: not found: type DataSourceWriter
[ERROR]         case w: DataSourceWriter
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:373: not found: type DataSourceWriter
[ERROR]         case w: DataSourceWriter
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:376: not found: type MicroBatchWriter
[ERROR]         case w: MicroBatchWriter
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:380: not found: type DataSourceWriter
[ERROR]             w.getClass.getMethod("writer").invoke(w).asInstanceOf[DataSourceWriter])
[ERROR]                                                                   ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:388: value source is not a member of org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation
[ERROR]           if ds.source.getClass.getCanonicalName.endsWith(HWCSupport.BATCH_READ_SOURCE) =>
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:394: not found: type DataSourceV2ScanExec
[ERROR]       case ds: DataSourceV2ScanExec
[ERROR]                ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:395: value source is not a member of org.apache.spark.sql.execution.SparkPlan
[ERROR]           if ds.source.getClass.getCanonicalName.endsWith(HWCSupport.BATCH_READ_SOURCE) =>
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:396: value options is not a member of org.apache.spark.sql.execution.SparkPlan
[ERROR]         getHWCEntity(ds.options)
[ERROR]                         ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:414: value tableIdentifier is not a member of org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
[ERROR]             val db = r.tableIdentifier.database.getOrElse(
[ERROR]                        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:416: value tableIdentifier is not a member of org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
[ERROR]             val tableName = r.tableIdentifier.table
[ERROR]                               ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:461: not found: type MicroBatchWriter
[ERROR]       case w: MicroBatchWriter
[ERROR]               ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:464: not found: type DataSourceWriter
[ERROR]         getHWCEntity(w.getClass.getMethod("writer").invoke(w).asInstanceOf[DataSourceWriter])
[ERROR]                                                                            ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:39: object v2 is not a member of package org.apache.spark.sql.sources
[ERROR] import org.apache.spark.sql.sources.v2.DataSourceV2
[ERROR]                                     ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:182: not found: type DataSourceV2
[ERROR]   def isKafkaRelationProvider(source: DataSourceV2): Boolean = {
[ERROR]                                       ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:559: not found: type DataSourceV2ScanExec
[ERROR]         case r: DataSourceV2ScanExec =>
[ERROR]                 ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:30: object DataSourceV2ScanExec is not a member of package org.apache.spark.sql.execution.datasources.v2
[ERROR] import org.apache.spark.sql.execution.datasources.v2.{DataSourceRDDPartition, DataSourceV2ScanExec}
[ERROR]        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:89: not found: type DataSourceV2ScanExec
[ERROR]   def extractSourceTopicsFromDataSourceV2(r: DataSourceV2ScanExec): Seq[KafkaTopicInformation] = {
[ERROR]                                              ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/CommandsHarvester.scala:568: not found: type MicroBatchWriter
[ERROR]       case writer: MicroBatchWriter => ExtractFromDataSource.extractTopic(writer) match {
[ERROR]                    ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:31: object MicroBatchWriter is not a member of package org.apache.spark.sql.execution.streaming.sources
[ERROR] import org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter
[ERROR]        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:66: not found: type MicroBatchWriter
[ERROR]   def extractTopic(writer: MicroBatchWriter): Option[KafkaTopicInformation] = {
[ERROR]                            ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:29: object MicroBatchWriter is not a member of package org.apache.spark.sql.execution.streaming.sources
[ERROR] import org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter
[ERROR]        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:31: object v2 is not a member of package org.apache.spark.sql.sources
[ERROR] import org.apache.spark.sql.sources.v2.writer.{DataWriterFactory, WriterCommitMessage}
[ERROR]                                     ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:38: object v2 is not a member of package org.apache.spark.sql.sources
[ERROR] import org.apache.spark.sql.sources.v2.writer.streaming.StreamWriter
[ERROR]                                     ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:173: not found: type StreamWriter
[ERROR]   class SinkDataSourceWriter(val sinkProgress: SinkProgress) extends StreamWriter {
[ERROR]                                                                      ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:83: value ++= is not a member of Seq[org.apache.spark.sql.execution.SparkPlan]
  Expression does not convert to assignment because:
    not found: type MicroBatchWriter
    expansion: outNodes = outNodes.++(Seq(WriteToDataSourceV2Exec(new <MicroBatchWriter: error>(0, new SinkDataSourceWriter(sink)), qd.qe.sparkPlan)))
[ERROR]       outNodes ++= Seq(
[ERROR]                ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:100: not found: value PersistedView
[ERROR]               case PersistedView =>
[ERROR]                    ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:174: not found: type DataWriterFactory
[ERROR]     override def createWriterFactory(): DataWriterFactory[InternalRow] =
[ERROR]                                         ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:177: not found: type WriterCommitMessage
[ERROR]     override def commit(messages: Array[WriterCommitMessage]): Unit =
[ERROR]                                         ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:180: not found: type WriterCommitMessage
[ERROR]     override def abort(messages: Array[WriterCommitMessage]): Unit =
[ERROR]                                        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:183: not found: type WriterCommitMessage
[ERROR]     override def commit(epochId: Long, messages: Array[WriterCommitMessage]): Unit =
[ERROR]                                                        ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/sql/SparkExecutionPlanProcessor.scala:186: not found: type WriterCommitMessage
[ERROR]     override def abort(epochId: Long, messages: Array[WriterCommitMessage]): Unit =
[ERROR]                                                       ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/utils/SparkUtils.scala:212: value getExecutionList is not a member of org.apache.spark.sql.hive.thriftserver.ui.HiveThriftServer2Listener
[ERROR]         val sessId = listener.getExecutionList.reverseIterator
[ERROR]                               ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:32: object v2 is not a member of package org.apache.spark.sql.sources
[ERROR] import org.apache.spark.sql.sources.v2.reader.InputPartition
[ERROR]                                     ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:33: object v2 is not a member of package org.apache.spark.sql.sources
[ERROR] import org.apache.spark.sql.sources.v2.writer.DataWriterFactory
[ERROR]                                     ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:233: not found: type DataWriterFactory
[ERROR]       writer: DataWriterFactory[InternalRow])
[ERROR]               ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:197: not found: type DataWriterFactory
[ERROR]   private def isKafkaStreamWriterFactory(writer: DataWriterFactory[InternalRow]): Boolean = {
[ERROR]                                                  ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:213: not found: type InputPartition
[ERROR]   private def isKafkaMicroBatchInputPartition(p: InputPartition[_]): Boolean = {
[ERROR]                                                  ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:217: not found: type InputPartition
[ERROR]   private def isKafkaContinuousInputPartition(p: InputPartition[_]): Boolean = {
[ERROR]                                                  ^
[WARNING] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:244: non-variable type argument String in type pattern java.util.Map[String,Object] is unchecked since it is eliminated by erasure
[WARNING]           case p: util.Map[String, Object] =>
[WARNING]                        ^
[WARNING] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:247: non-variable type argument String in type pattern scala.collection.immutable.Map[String,String] (the underlying of Map[String,String]) is unchecked since it is eliminated by erasure
[WARNING]           case p: Map[String, String] => Some(p)
[WARNING]                   ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:309: not found: type InputPartition
[ERROR]       p: InputPartition[_]): Option[(String, Option[String])] = {
[ERROR]          ^
[ERROR] /opt/myjar/spark-hook/spark-atlas-connector-master/spark-atlas-connector/src/main/scala/org/apache/spark/sql/kafka010/atlas/ExtractFromDataSource.scala:335: not found: type InputPartition
[ERROR]       p: InputPartition[_]): Option[(String, Option[String])] = {
[ERROR]          ^
[WARNING] two warnings found
[ERROR] 55 errors found
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for spark-atlas-connector-main_2.11 0.1.0-SNAPSHOT:
[INFO] 
[INFO] spark-atlas-connector-main_2.11 .................... SUCCESS [  4.585 s]
[INFO] spark-atlas-connector_2.11 ......................... FAILURE [  6.897 s]
[INFO] spark-atlas-connector-assembly ..................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  11.686 s
[INFO] Finished at: 2023-01-17T15:22:47+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (default-cli) on project spark-atlas-connector_2.11: Execution default-cli of goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile failed.: CompileFailed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :spark-atlas-connector_2.11

3 answers (the accepted answer is shown below)

  • 2301_76287955 2023-01-18 17:29

    This is a personally maintained Spark SQL hook project on GitHub. Your problem is most likely that you configured newer versions of Spark and Scala, which created version conflicts; many of these classes were changed or removed in the newer versions, so naturally they can no longer be found.
    (1) The error on the line <import org.apache.spark.sql.execution.datasources.v2.{DataSourceV2Relation, DataSourceV2ScanExec, WriteToDataSourceV2Exec}> is simply that DataSourceV2ScanExec cannot be found. Add the matching hadoop-common-*.jar and keep adjusting versions until this error no longer appears; the other errors in CommandsHarvester.scala will then disappear on their own (see the sketch after this answer).
    (2) Then modify spark-atlas-connector/src/main/scala/com/hortonworks/spark/atlas/utils/SparkUtils.scala.
    The changed section:

    def currSessionUser(qe: QueryExecution): String = {
      currUser()
      /*
      // ok, i accept your suggestion
      val thriftServerListener = Option(HiveThriftServer2.listener)
      thriftServerListener match {
        case None => currUser()
      }
      */
    }
    After this change, the errors in SparkUtils.scala disappear.
    ... Debug the other errors the same way. I recommend using IDEA, because problem lines are highlighted in red and visible at a glance.
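
    To make the version conflict concrete: the missing classes (DataSourceV2ScanExec, MicroBatchWriter, everything under org.apache.spark.sql.sources.v2) ship with spark-sql/spark-catalyst 2.4.x and were removed or relocated in Spark 3.x, so the stock SAC sources compile only against a Spark 2.4.x / Scala 2.11 classpath. A minimal sketch of the API difference (Spark24ApiCheck is a made-up name for illustration, not SAC code; it assumes spark-catalyst 2.4.x and Scala 2.11 on the compile classpath):

    import org.apache.spark.sql.catalyst.TableIdentifier
    import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation

    object Spark24ApiCheck {
      def main(args: Array[String]): Unit = {
        // Spark 2.4: UnresolvedRelation carries a single TableIdentifier.
        val rel = UnresolvedRelation(TableIdentifier("my_table", Some("my_db")))
        // Spark 3.x replaced this field with multipartIdentifier: Seq[String],
        // which is exactly the "value tableIdentifier is not a member" error above.
        println(rel.tableIdentifier.database.getOrElse("default"))

        // The DataSourceV2 imports SAC relies on were likewise removed in 3.x:
        //   org.apache.spark.sql.sources.v2.*                 (moved under org.apache.spark.sql.connector)
        //   ...execution.datasources.v2.DataSourceV2ScanExec  (dropped)
        //   ...execution.streaming.sources.MicroBatchWriter   (dropped)
      }
    }

    If the project's top-level pom.xml exposes spark.version and scala.version properties (an assumption about this fork), pinning them back to a Spark 2.4.x and Scala 2.11 release is usually simpler than porting every call site by hand.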

    This answer was selected as the best answer by the asker.


Question events

  • Closed by the system on Jan 26
  • Answer accepted on Jan 18
  • Question edited on Jan 17
  • ¥15 bounty added on Jan 17
