deaftstill
2021-09-11 12:47

Spark throws java.lang.NullPointerException when writing from Hive to SQL Server

The error is as follows:


java.lang.NullPointerException
    at com.microsoft.jdbc.sqlserver.tds.TDSRPCParameter.initializeUserParam(Unknown Source)
    at com.microsoft.jdbc.sqlserver.SQLServerImplStatement.addUserParametersToRPC(Unknown Source)
    at com.microsoft.jdbc.sqlserver.SQLServerImplStatement.execute(Unknown Source)
    at com.microsoft.jdbc.base.BaseStatement.commonExecute(Unknown Source)
    at com.microsoft.jdbc.base.BasePreparedStatement.executeBatchEmulation(Unknown Source)
    at com.microsoft.jdbc.base.BasePreparedStatement.executeBatch(Unknown Source)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:771)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:933)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:933)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

The Spark code that writes to SQL Server is as follows (option values redacted):

    df.write.format("jdbc").mode(SaveMode.Append)
      .option("driver","com.microsoft.jdbc.sqlserver.SQLServerDriver")
      .option("url","jdbc:microsoft:sqlserver://<ServerName>;databaseName=xxx")
      .option("dbtable","")
      .option("user","")
      .option("password","")
      .option("batchsize","")
      .save()

Driver jars:
mssqlserver.jar
msbase.jar
msutil.jar
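
These are the jars of the legacy Microsoft SQL Server 2000 JDBC driver. For context, a minimal sketch of how they would typically be put on the driver and executor classpaths (the paths, app name, and the use of `spark.jars` here are my assumptions, not details from the job itself):

    import org.apache.spark.sql.SparkSession

    // Sketch only: jar paths are placeholders, not the real ones used in the job.
    // spark.jars distributes the listed jars to the driver and the executors.
    val spark = SparkSession.builder()
      .appName("hive-to-sqlserver")          // hypothetical app name
      .enableHiveSupport()                   // the source data is read from Hive
      .config("spark.jars",
        "/path/to/msbase.jar,/path/to/msutil.jar,/path/to/mssqlserver.jar")
      .getOrCreate()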

The error occurs when Spark inserts string data into a varchar column in SQL Server 2000. To rule out a configuration problem, I used exactly the same configuration to insert int-type data, and that ran without errors. How should I go about solving this?
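
For reference, a minimal sketch of the failing versus working cases described above (the table names, column names, and credentials are placeholders I made up, not the real values):

    import org.apache.spark.sql.SaveMode
    import spark.implicits._  // assumes the SparkSession above is named `spark`

    // Failing case: a StringType column written to a varchar column.
    val dfStr = Seq("a", "b", "c").toDF("name")
    dfStr.write.format("jdbc").mode(SaveMode.Append)
      .option("driver", "com.microsoft.jdbc.sqlserver.SQLServerDriver")
      .option("url", "jdbc:microsoft:sqlserver://<ServerName>;databaseName=xxx")
      .option("dbtable", "test_varchar")     // hypothetical table with a varchar column
      .option("user", "<user>")
      .option("password", "<password>")
      .save()                                // throws java.lang.NullPointerException

    // Working case: the same configuration with an int column succeeds.
    val dfInt = Seq(1, 2, 3).toDF("id")
    dfInt.write.format("jdbc").mode(SaveMode.Append)
      .option("driver", "com.microsoft.jdbc.sqlserver.SQLServerDriver")
      .option("url", "jdbc:microsoft:sqlserver://<ServerName>;databaseName=xxx")
      .option("dbtable", "test_int")         // hypothetical table with an int column
      .option("user", "<user>")
      .option("password", "<password>")
      .save()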
