ttt_cw 2021-12-03 11:35

Flink SQL query error

Could not execute SQL statement. Reason:
java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V

Versions

Hadoop 3.1.3
Hive 3.1.2
Flink 1.12.0

Environment variables

# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export HADOOP_CLASSPATH=`hadoop classpath`
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
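
Because HADOOP_CLASSPATH is exported, every jar that hadoop classpath resolves, including Hadoop's own Guava, is visible to Flink at runtime. As a quick sanity check (a sketch; --glob makes Hadoop expand its wildcard entries), you can list exactly which Guava jars Hadoop contributes:

# Expand the Hadoop classpath and show every Guava jar on it; whatever
# appears here is loaded alongside Flink's own dependencies.
hadoop classpath --glob | tr ':' '\n' | grep -i guava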

The Guava versions under Hive and Hadoop are the same:

.../hadoop-3.1.3/share/hadoop/common/lib/*guava*
guava-27.0-jre.jar
listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar

.../hive/lib/*guava*
guava-27.0-jre.jar
jersey-guava-2.25.1.jar
listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar
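
The overload named in the error, checkArgument(boolean, String, Object), does exist in Guava 27, so the NoSuchMethodError implies some older Guava copy (which lacks this overload) is found first at runtime. To confirm the method really is in the jar above (javap ships with the JDK):

# Disassemble Preconditions from guava-27 and search for the exact
# overload named in the NoSuchMethodError.
javap -classpath guava-27.0-jre.jar com.google.common.base.Preconditions \
  | grep -F 'checkArgument(boolean, java.lang.String, java.lang.Object)'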

Key Hive configuration (hive-site.xml):

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://192.168.0.182:3306/metastore?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>node04</value>
    </property>
    <property>
        <name>hive.server2.active.passive.ha.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://node04:9083</value>
    </property>
</configuration>

Key jars under Flink's lib directory:

flink-connector-hive_2.12-1.12.0.jar
flink-hadoop-compatibility_2.12-1.12.0.jar
libfb303-0.9.3.jar
hive-exec-3.1.2.jar
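
Note that hive-exec is a fat jar and, in some Hive builds, bundles its own unrelocated copy of Guava; whichever copy the classloader finds first determines which checkArgument overloads exist. A sketch for locating every jar that carries the class (the Flink path here is illustrative, adjust it to your install):

# Report each jar under Flink's lib/ that bundles Guava's Preconditions.
for j in /opt/module/flink-1.12.0/lib/*.jar; do
  unzip -l "$j" 2>/dev/null \
    | grep -q 'com/google/common/base/Preconditions.class' \
    && echo "$j bundles com.google.common.base.Preconditions"
done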

Flink SQL Client configuration (conf/sql-client-defaults.yaml):

catalogs: #  [] # empty list
   - name: myhive
     type: hive
     hive-conf-dir: /opt/module/hive/conf
     default-database: default

execution:
  planner: blink
  type: streaming
 ...
  current-catalog: myhive
  current-database: default

Querying with the Flink SQL client:

bin/sql-client.sh embedded

Flink SQL> show databases;
data_center_sharing
default

Flink SQL> select * from data_center_sharing.ods_resource_ship_info where ship_reg_no=270704001486;
2021-12-03 11:14:19,845 INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient         [] - Trying to connect to metastore with URI thrift://node04:9083
2021-12-03 11:14:19,845 INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient         [] - Opened a connection to metastore, current connections: 2
2021-12-03 11:14:19,891 INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient         [] - Connected to metastore.
2021-12-03 11:14:19,891 INFO  org.apache.hadoop.hive.metastore.RetryingMetaStoreClient     [] - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.metastore.HiveMetaStoreClient ugi=pdiwt (auth:SIMPLE) retries=1 delay=1 lifetime=0
2021-12-03 11:14:20,007 INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient         [] - Closed a connection to metastore, current connections: 1
2021-12-03 11:14:20,069 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process : 2
2021-12-03 11:14:20,079 INFO  org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient [] - SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-12-03 11:14:20,089 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process : 2
2021-12-03 11:14:20,098 INFO  org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient [] - SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-12-03 11:14:20,116 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process : 2
2021-12-03 11:14:20,119 INFO  org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient [] - SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[ERROR] Could not execute SQL statement. Reason:
java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V

Flink SQL> 

Relevant log. The stack trace shows the NoSuchMethodError being thrown while the JobManager deserializes the Hive connector's JobConfWrapper, so the conflicting Guava copy is on the cluster-side classpath, not just in the SQL client:

Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357) ~[hadoop-common-3.1.3.jar:?]
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338) ~[hadoop-common-3.1.3.jar:?]
    at org.apache.hadoop.conf.Configuration.readFields(Configuration.java:3798) ~[hadoop-common-3.1.3.jar:?]
    at org.apache.flink.connectors.hive.JobConfWrapper.readObject(JobConfWrapper.java:67) ~[flink-connector-hive_2.12-1.12.0.jar:1.12.0]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_291]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_291]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_291]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_291]
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2294) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2185) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1665) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2403) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2327) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2185) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1665) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2403) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2327) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2185) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1665) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2403) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2327) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2185) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1665) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:501) ~[?:1.8.0_291]
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:459) ~[?:1.8.0_291]
    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:58) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:320) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:215) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:827) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:237) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:291) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:256) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:238) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:134) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:108) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:323) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:310) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:96) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:41) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:141) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:80) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:450) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) ~[?:1.8.0_291]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_291]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_291]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_291]

2 answers (only the accepted answer is shown)

  • 新民工涛哥 2021-12-05 08:35 (accepted answer)
    1. A NoSuchMethodError usually means the jar containing that method was left out of the deployment, or that two jars on the classpath conflict.
    2. For a jar conflict, either exclude one of the conflicting dependencies, or rebuild flink-connector-hive and relocate (rename) the Guava packages with a Maven plugin, as sketched below.
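
A minimal sketch of option 2, assuming you rebuild flink-connector-hive yourself with the Maven Shade plugin (the plugin version and shaded package name are illustrative, not the project's actual build configuration):

<!-- pom.xml of the rebuilt connector module. Relocating com.google.common
     renames the connector's Guava references so they can no longer collide
     with the Guava 27 that Hadoop and Hive put on the classpath. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <!-- illustrative relocated package name -->
                        <shadedPattern>shaded.com.google.common</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>

Then replace flink-connector-hive_2.12-1.12.0.jar in Flink's lib directory with the shaded build.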

