baidu_24473805 天下无双418
Acceptance rate: 100%
2017-06-06 08:21 · 3.4k views
Accepted

Spark throws an error when reading a Hive table via JDBC; I am running the code in Zeppelin.


Code:

import org.apache.spark.sql.hive.HiveContext

val pro = new java.util.Properties()
pro.setProperty("user", "****")
pro.setProperty("password", "*****")

val driverName = "org.apache.hadoop.hive.jdbc.HiveDriver"
Class.forName(driverName)

val hiveContext = new HiveContext(sc)
val hivetable = hiveContext.read.jdbc("jdbc:hive://*****/default", "*****", pro)

Error:

import org.apache.spark.sql.hive.HiveContext
pro: java.util.Properties = {}
res15: Object = null
res16: Object = null
driverName: String = org.apache.hadoop.hive.jdbc.HiveDriver
res17: Class[_] = class org.apache.hadoop.hive.jdbc.HiveDriver
warning: there was one deprecation warning; re-run with -deprecation for details
hiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@14f9cc13
java.sql.SQLException: Method not supported
at org.apache.hadoop.hive.jdbc.HiveResultSetMetaData.isSigned(Unknown Source)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:232)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:64)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:113)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:166)
... 46 elided


1 answer

  • Accepted
    baidu_24473805 天下无双418 2017-06-07 03:32

    With the current Hive 1.2 JDBC driver, this call fails with `java.sql.SQLException: Method not supported at org.apache.hive.jdbc.HiveResultSetMetaData.isSigned`. The reason is that the Hive JDBC driver does not implement `ResultSetMetaData.isSigned` (it is still unimplemented even in Hive 2.0), while Spark 1.5 and later call this method from Spark SQL's `resolveTable` when resolving a JDBC table's schema. So reading Hive over JDBC this way cannot work on those Spark versions; an older Spark release might still work.
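    Since the question already creates a `HiveContext`, a common workaround is to skip JDBC entirely and let Spark read the table through the Hive metastore. A minimal sketch, assuming the Zeppelin/spark-shell session provides `sc` and that `default.some_table` stands in for the real database and table name (both are placeholders, not from the original post):

    ```scala
    import org.apache.spark.SparkContext
    import org.apache.spark.sql.hive.HiveContext

    // `sc` is the SparkContext that Zeppelin / spark-shell provides.
    val hiveContext = new HiveContext(sc)

    // Read the Hive table directly via the metastore instead of JDBC,
    // which avoids the unimplemented isSigned() call in the Hive driver.
    // "default.some_table" is a hypothetical placeholder name.
    val df = hiveContext.table("default.some_table")

    // Equivalent SQL form:
    val df2 = hiveContext.sql("SELECT * FROM default.some_table")

    df.printSchema()
    ```

    This path uses Spark's native Hive integration (the metastore and HDFS data files), so the Hive JDBC driver and its `HiveResultSetMetaData` limitations are never involved.
    
    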

