Error accessing Hive from Spark 2 in a Kerberos environment

2019-05-13 21:27:07,394 [main] WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Self-test query [select "DB_ID" from "DBS"] failed; direct SQL is disabled
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:243)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:146)
at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:406)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:338)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:299)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:612)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:578)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:639)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6869)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:248)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1700)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3581)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3633)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3613)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3867)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247)
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:230)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:387)
at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:331)
at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:311)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:287)
at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:895)
at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:859)
at org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1521)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:268)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:360)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:264)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:197)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:106)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:94)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:290)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1059)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:137)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:136)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:136)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:133)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
at com.bigdata_example.oozie.SparkDemo.main(SparkDemo.java:23)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:181)
at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:93)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:101)
at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
NestedThrowablesStackTrace:
java.sql.SQLSyntaxErrorException: Table/View 'DBS' does not exist.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:1193)
at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:345)
at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:211)
at org.datanucleus.store.rdbms.query.SQLQuery.performExecute(SQLQuery.java:633)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1844)
at org.datanucleus.store.rdbms.query.SQLQuery.executeWithArray(SQLQuery.java:807)
at org.datanucleus.store.query.Query.execute(Query.java:1715)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:371)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:243)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:146)
at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:406)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:338)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:299)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:612)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:578)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:639)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6869)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:248)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1700)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3581)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3633)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3613)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3867)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247)
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:230)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:387)
at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:331)
at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:311)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:287)
at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:895)
at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:859)
at org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1521)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:268)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:360)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:264)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:197)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:106)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:94)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:290)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1059)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:137)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:136)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:136)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:133)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
at com.bigdata_example.oozie.SparkDemo.main(SparkDemo.java:23)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:181)
at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:93)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:101)
at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Caused by: ERROR 42X05: Table/View 'DBS' does not exist.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(Unknown Source)
at org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.FromList.bindTables(Unknown Source)
at org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(Unknown Source)
at org.apache.derby.impl.sql.compile.DMLStatementNode.bind(Unknown Source)
at org.apache.derby.impl.sql.compile.CursorNode.bindStatement(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
... 113 more
Without Kerberos authentication it complained that the database could not be found, which I guessed was a permissions problem; after adding Kerberos authentication it instead fails with java.lang.reflect.InvocationTargetException, caused by java.lang.NullPointerException.
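For reference, below is a minimal Java sketch (the class name, paths, and principal are placeholders, not the original poster's values) of logging in from a keytab explicitly before building the SparkSession. Note also that the nested Derby error above says the DBS table is missing from an embedded Derby database, which usually means hive-site.xml was not on the driver's classpath and Spark silently created a fresh local metastore; with Oozie it is worth confirming the Hive configuration is shipped with the action before suspecting permissions.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.spark.sql.SparkSession;

public class SparkHiveKerberosSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder paths and principal -- substitute your own.
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "user@EXAMPLE.COM", "/path/to/user.keytab");

        // enableHiveSupport() only reaches the real metastore when
        // hive-site.xml is on the classpath; without it Spark falls back
        // to an empty embedded Derby database (which has no DBS table).
        SparkSession spark = SparkSession.builder()
                .appName("kerberos-hive-demo")
                .enableHiveSupport()
                .getOrCreate();
        spark.sql("show databases").show();
    }
}
```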

Other related posts
Spark with Kerberos authentication: client-mode submission works, cluster mode fails

Help: Spark with Kerberos authentication submits jobs fine in client mode, but cluster mode fails. Suggestions welcome; the C-coin bounty goes to whoever solves it. Cluster environment: Spark version 2.4.0-cdh6.3.2. Application code: all dependency versions match the CDH environment. Command:
```
spark-submit --class cn.com.test.sparkReadOns --keytab spark.keytab --principal spark/hadoop-namenode2 --master yarn --deploy-mode cluster ....
```
Error log:
```
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105)
at org.apache.spark.deploy.security.HiveDelegationTokenProvider.hiveConf(HiveDelegationTokenProvider.scala:49)
at org.apache.spark.deploy.security.HiveDelegationTokenProvider.delegationTokensRequired(HiveDelegationTokenProvider.scala:72)
at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:131)
at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:130)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.deploy.security.HadoopDelegationTokenManager.obtainDelegationTokens(HadoopDelegationTokenManager.scala:130)
at org.apache.spark.deploy.yarn.security.YARNHadoopDelegationTokenManager.obtainDelegationTokens(YARNHadoopDelegationTokenManager.scala:59)
at org.apache.spark.deploy.yarn.Client.setupSecurityToken(Client.scala:310)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:1014)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:179)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1135)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1527)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.0.0-cdh6.3.2
at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:174)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:139)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:100)
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:368)
... 26 more
```

JDBC query against Hive fails: local host is: (unknown);

**Hive SQL:** select row_number() over (order by trademark) id,* from iot.dr_ps_20171228 limit 1
**Error:** java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "xhw367":8032; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
**Notes:**
1. The odd "local host" spelling comes from the exception message itself, not from my typing.
2. This only happens with Hive aggregate functions; plain SELECTs are fine.
**Solved, see: http://blog.csdn.net/tanglei6636/article/details/79161695**

What is the difference between Hive on Spark and Spark SQL?

Both Hive on Spark and Spark SQL compute on the Spark engine, so personally I see no difference. One commenter said: Hive on Spark was developed by Cloudera and Spark SQL by the Spark project; does that count as a difference? Different syntax? Hoping an expert can explain.

Spark JDBC connection to Impala fails with Method not supported

Hi all. My Spark is 2.1.0 with hive-jdbc 2.1.0, and writing to Impala now fails with the following error:
java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HivePreparedStatement.addBatch(HivePreparedStatement.java:75)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:589)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:925)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:923)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:923)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2305)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2305)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2305)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2304)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:670)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:77)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:518)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:446)
at com.aoyou.data.CustomerVisitProduct$.saveToHive(CustomerVisitProduct.scala:281)
at com.aoyou.data.CustomerVisitProduct$.main(CustomerVisitProduct.scala:221)
at com.aoyou.data.CustomerVisitProduct.main(CustomerVisitProduct.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HivePreparedStatement.addBatch(HivePreparedStatement.java:75)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:589)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The code is as follows:
val sparkConf = new SparkConf().setAppName("save").set("spark.sql.crossJoin.enabled", "true");
val sparkSession = SparkSession.builder().enableHiveSupport().getOrCreate();
val dataframe = sparkSession.createDataFrame(rddSchema, new Row().getClass())
val property = new Properties();
property.put("user", "xxxxx")
property.put("password", "xxxxx")
dataframe.write.mode(SaveMode.Append).option("driver", "org.apache.hive.jdbc.HiveDriver").jdbc("jdbc:hive2://xxxx:21050/rawdata;auth=noSasl", "tablename", property)
What is going on here? It feels like a driver version problem.
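Not an authoritative diagnosis, but the trace itself is telling: HivePreparedStatement.addBatch() in hive-jdbc simply throws "Method not supported", so DataFrameWriter.jdbc() can never batch rows through a jdbc:hive2 URL regardless of version. One workaround sketch, assuming the target table is reachable through the session's metastore (Java; the db.table name is hypothetical):

```
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class CatalogWriteSketch {
    // Bypass JDBC entirely: Spark writes the files and registers/updates
    // the table in the shared metastore, so addBatch() is never called.
    public static void append(Dataset<Row> df) {
        df.write()
          .mode(SaveMode.Append)
          .saveAsTable("rawdata.tablename"); // hypothetical db.table
    }
}
```

For Impala specifically, Cloudera also ships a dedicated Impala JDBC driver that reportedly does implement batch inserts, which would be the other direction to try.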

Spark SQL cannot read a Hive table whose name contains "."

val hiveDeptDF = sqlContext.read.table("emp_test.emp")
I want to read the emp table in the emp_test Hive database, but it errors out saying the name must not contain ".":
Exception in thread "main" org.apache.spark.sql.AnalysisException: Specifying database name or other qualifiers are not allowed for temporary tables. If the table name has dots (.) in it, please quote the table name with backticks (`).;
at org.apache.spark.sql.catalyst.analysis.Catalog$class.getTableName(Catalog.scala:70)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.getTableName(Catalog.scala:82)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:104)
at org.apache.spark.sql.DataFrameReader.table(DataFrameReader.scala:338)
at Hive2Rdbms$.main(Hive2Rdbms.scala:16)
at Hive2Rdbms.main(Hive2Rdbms.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
After adding backticks it instead says the table cannot be found. The Hive database itself is fine.
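The SimpleCatalog frames in that trace show the lookup is going through a plain SQLContext, whose catalog only knows temporary tables, so any database-qualified name is rejected, and backticks just make it search for a temp table literally named emp_test.emp. A sketch of reading through a Hive-aware session instead (Java, Spark 2 style; on Spark 1.x the equivalent move is constructing a HiveContext rather than a SQLContext):

```
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadQualifiedTable {
    public static void main(String[] args) {
        // A Hive-enabled session resolves "db.table" through the metastore.
        SparkSession spark = SparkSession.builder()
                .appName("read-emp-test")
                .enableHiveSupport()
                .getOrCreate();
        Dataset<Row> emp = spark.table("emp_test.emp");
        emp.show();
    }
}
```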

Java Kerberos ticket expires after a while; what should we do?

Our Hadoop cluster uses Kerberos, and we access it from Java using a keytab and a krb5.conf file. The project runs fine, but after about 24 hours it starts failing: Hive and HDFS on the cluster become unreachable and it reports that the ticket is unavailable. What should we do, concretely?
```
javax.security.sasl.SaslException: GSS initiate failed
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:196)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.commons.pool.impl.GenericObjectPool.addObject(GenericObjectPool.java:1691)
at org.apache.commons.pool.impl.GenericObjectPool.ensureMinIdle(GenericObjectPool.java:1648)
at org.apache.commons.pool.impl.GenericObjectPool.access$700(GenericObjectPool.java:192)
at org.apache.commons.pool.impl.GenericObjectPool$Evictor.run(GenericObjectPool.java:1784)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 20 common frames omitted
```
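The usual remedy (a sketch, not a drop-in fix) for long-running JVMs that log in once from a keytab is to re-login before the default 24-hour ticket lifetime runs out. Hadoop's UserGroupInformation supports this directly, and the call is cheap while the ticket is still fresh, so it can run before each Hive/HDFS access or on a timer:

```
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosRelogin {
    // Call before each Hive/HDFS operation: if the TGT is near expiry,
    // Hadoop re-logs-in from the keytab given to loginUserFromKeytab().
    public static void ensureFreshTicket() throws IOException {
        UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
    }
}
```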

After enabling Kerberos on a CDH cluster, no service works; errors mention SASL GSS. How do I fix this?

After enabling Kerberos authentication on the CDH cluster, none of the services work; every one reports: An error (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Ticket expired (32) - PROCESS_TGS)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. ![screenshot](https://img-ask.csdn.net/upload/202005/07/1588821168_152217.jpg) Kerberos itself works fine and kinit succeeds, but every host in the CDH cluster shows a red alert, all with this same message. ![screenshot](https://img-ask.csdn.net/upload/202005/07/1588821261_922944.jpg) All the preparation done before installing CMS is in this Kingsoft Docs document: [environment setup steps (环境搭建步骤.docx)](https://kdocs.cn/l/sfQ2sOGTL). If I disable Kerberos authentication for all services following the steps below, everything works again. This problem has been bothering me for days. ![screenshot](https://img-ask.csdn.net/upload/202005/07/1588821598_276288.jpg)

Insufficient permissions when creating Hive tables through Beeline

Launching Hive directly with bin/hive, I can create tables normally, but connecting through Beeline reports insufficient permissions. Why is that? ![screenshot](https://img-ask.csdn.net/upload/201809/19/1537327988_10403.jpg)

Kettle connects to hive2 fine, but fetching table metadata fails

java.lang.reflect.InvocationTargetException: Problem encountered getting information from the database:
org.pentaho.di.core.exception.KettleDatabaseException: Unable to retrieve database information because of an error
Unable to get list of procedures from database meta-data: Unable to get list of rows from ResultSet : Error determining value metadata from SQL resultset metadata
Method not supported
at org.pentaho.di.ui.core.database.dialog.GetDatabaseInfoProgressDialog$1.run(GetDatabaseInfoProgressDialog.java:67)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:113)
Caused by: org.pentaho.di.core.exception.KettleDatabaseException: Unable to retrieve database information because of an error
Unable to get list of procedures from database meta-data: Unable to get list of rows from ResultSet : Error determining value metadata from SQL resultset metadata
Method not supported
Kettle version 5.4, Hive 1.2.1, Hadoop 2.7.1.

Kerberos authentication fails for a Java program scheduled by Oozie

Hi experts: I wrote a Java program that reads real-time files from HDFS, parses them, and stores the results in HBase. It runs on the Hadoop cluster and is scheduled periodically by Oozie. Initially there was no Kerberos authentication and everything was fine. After Kerberos security was enabled as required, it started failing with:
```
2018-06-25 12:15:02,551 WARN [SimpleAsyncTaskExecutor-1] org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2018-06-25 12:15:02,551 FATAL [SimpleAsyncTaskExecutor-1] org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:618)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:163)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:744)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:741)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:741)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:907)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:34070)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1594)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1411)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1211)
at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:988)
at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:905)
at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$100(AsyncProcess.java:615)
at org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:597)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:974)
at com.ailk.xdrloader.batch.writer.EnhanceFileItemWriter.writeHbaseByApi(EnhanceFileItemWriter.java:377)
at com.ailk.xdrloader.batch.writer.EnhanceFileItemWriter.writeToHbase(EnhanceFileItemWriter.java:309)
at com.ailk.xdrloader.batch.writer.EnhanceFileItemWriter.write(EnhanceFileItemWriter.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:133)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:121)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy32.write(Unknown Source)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:175)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:151)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:274)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:199)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:75)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:406)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:330)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:271)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:81)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:374)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:144)
at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:257)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:200)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:139)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:136)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:268)
at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 58 more
```
At first I suspected the keytab or krb5.conf, but running the program locally works fine; it only fails when launched through Oozie. Does an Oozie invocation need special handling? The Kerberos login code is:
```
public static Configuration getConfInstance() {
	if (conf == null) {
		synchronized (HbaseUtil.class) {
			if (conf != null)
				return conf;
			LOG.info("krb5 file:" + krb5Conf + "\tkeytab file:" + keyTab + "\tprincipal:" + principal);
			System.setProperty("java.security.krb5.conf", krb5Conf);
			conf = HBaseConfiguration.create();
			conf.addResource("core-site.xml");
			conf.addResource("hdfs-site.xml");
			conf.set("keytab.file", keyTab);
			conf.set("kerberos.principal", principal);
			try {
				UserGroupInformation.setConfiguration(conf);
				UserGroupInformation.loginUserFromKeytab(principal, keyTab);
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
	return conf;
}
```
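One relevant difference is that an Oozie launcher already runs the action inside UserGroupInformation.doAs() with delegation tokens, so a static loginUserFromKeytab() in the action code does not necessarily become the identity HBase's RPC layer uses. A sketch of binding the HBase calls to an explicit UGI instead (Java; principal and keytab are the same values getConfInstance() already receives):

```
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class OozieHBaseLogin {
    public static void runAs(String principal, String keytab) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        UserGroupInformation.setConfiguration(conf);
        // Build a dedicated UGI rather than replacing the static login
        // user, which under Oozie is a token-based launcher identity.
        UserGroupInformation ugi =
                UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            // ... create the HBase connection and do the writes here, so
            // every RPC inside this block authenticates as `ugi` ...
            return null;
        });
    }
}
```

The other common route is to let Oozie obtain the HBase credential itself via a `<credentials>` block in the workflow definition, instead of authenticating in application code.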

Submitting a Spark job through Livy fails

POST address: http://192.168.244.130:8998/batches
Body: {"file":"hdfs://192.168.244.130:9000/mdjar/modelcall-2.0.jar","className":"cn.com.cnpc.klmy.common.WordCount2"}
Error: xxx.ClassNotFoundException: cn.com.cnpc.klmy.common.WordCount2
Could the experts tell me where I went wrong? Any solution or suggestion is welcome; many thanks! Screenshots below. Figure 1: the request sent from Postman ![screenshot](https://img-ask.csdn.net/upload/201811/09/1541703626_743573.png) Figure 2: the Livy admin page ![screenshot](https://img-ask.csdn.net/upload/201811/09/1541703430_346359.png)
```
Note: submitting on the Linux server with spark-submit succeeds:
./spark-submit --master spark://192.168.244.130:7077 --class cn.com.cnpc.klmy.common.WordCount2 --executor-memory 1G --total-executor-cores 2 /root/modelcall-2.0.jar
```

Kafka integrated with Kerberos fails to start

When Kafka uses the Kerberos protocol, starting Kafka fails ZooKeeper validation. Error: ![screenshot](https://img-ask.csdn.net/upload/201902/02/1549098295_69981.png) Kerberos user keys: ![screenshot](https://img-ask.csdn.net/upload/201902/02/1549098396_900993.png)

Kerberos /etc/krb5.conf:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = EXAMPLE.COM
default_tkt_enctypes = arcfour-hmac-md5
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
EXAMPLE.COM = {
kdc = 192.168.1.41
admin_server = 192.168.1.41
}
[domain_realm]
kafka = EXAMPLE.COM
zookeeper = EXAMPLE.COM
weiwei = EXAMPLE.COM
192.168.1.41 = EXAMPLE.COM
127.0.0.1 = EXAMPLE.COM

Kerberos /var/kerberos/krb5kdc/kdc.conf:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
EXAMPLE.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
max_renewable_life = 7d
supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

Kafka kafka_server_jaas.conf:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="kafka/weiwei@EXAMPLE.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="zookeeper/192.168.1.41@EXAMPLE.COM";
};

zookeeper_jaas.conf:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/var/kerberos/krb5kdc/kafka.keytab"
principal="zookeeper/192.168.1.41@EXAMPLE.COM";
};

Additions to zookeeper.properties:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

Additions to server.properties:
advertised.host.name=192.168.1.41
advertised.listeners=SASL_PLAINTEXT://192.168.1.41:9092
listeners=SASL_PLAINTEXT://192.168.1.41:9092
#listeners=PLAINTEXT://127.0.0.1:9093
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka

Addition to zookeeper-server-start.sh:
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/home/shubei/Downloads/kafka_2.12-1.0.0/config/zookeeper_jaas.conf'

Addition to kafka-server-start.sh:
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/home/shubei/Downloads/kafka_2.12-1.0.0/config/kafka_server_jaas.conf'

That is essentially the configuration. Chinese New Year is almost here; I'm online begging for a rescue, and wishing everyone a happy New Year in advance.

Sqoop 1 from Java reports Can't get Kerberos principal renewer

Full code below. **Using the Sqoop 1 Java API through Kerberos raises Can't get Master Kerberos principal for use as renewer.**
```
public class SqoopTest {
	public static void main(String[] args) throws Exception {
		// =================================================================
		Configuration conf = new Configuration();
		conf.set("fs.default.name", "hdfs://101.30.188.246:9000/"); // HDFS service address
		String keytabFile = "/home/hcj/tab/hdfs.keytab";
		String principle = "hdfs@MSO.COM";
		String krbConf = "/home/hcj/krb5.conf";
		System.setProperty("java.security.krb5.conf", krbConf);
		conf.set("hadoop.security.authentication", "Kerberos");
		//conf.setBoolean("fs.hdfs.imHADOpl.disable.cache", true);
		conf.set("keytab.file", keytabFile);
		UserGroupInformation.setConfiguration(conf);
		UserGroupInformation.loginUserFromKeytab(principle, keytabFile);
		// =================================================================
		String[] arg = new String[] {
			// database connection info
			/*
			 * sqoop export --connect jdbc:mysql://127.0.0.1:3306/test --username jamie --table
			 * persons --export-dir /user/hive/warehouse/dw_api_server.db/persons2/
			 * --input-fields-terminated-by '\t' --input-lines-terminated-by '\n'
			 */
			"--connect", "jdbc:mysql://114.115.156.37:3306/test",
			"--username", "root",
			"--password", "root",
			"--table", "persons",
			"--m", "1",
			"--export-dir", "hdfs://101.30.188.246:9000/user/hive/warehouse/dw_api_server.db/persons/",
			"--input-fields-terminated-by", "\t"
			//"-columns","id,city"
		};
		String[] expandArguments = OptionsFileUtil.expandArguments(arg);
		SqoopTool tool = SqoopTool.getTool("export");
		Configuration loadPlugins = SqoopTool.loadPlugins(conf);
		Sqoop sqoop = new Sqoop((com.cloudera.sqoop.tool.SqoopTool) tool, loadPlugins);
		int res = Sqoop.runSqoop(sqoop, expandArguments);
		if (res == 0)
			System.out.println("success");
	}
}
```
Error:
```
java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:133)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:166)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:322)
at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:299)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:440)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at com.mshuoke.datagw.impl.sqoop.SqoopTest.main(SqoopTest.java:58)
```
Any help appreciated.
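"Can't get Master Kerberos principal for use as renewer" comes from TokenCache when the job Configuration contains no yarn.resourcemanager.principal, which is typical when a Configuration is built by hand without the cluster's YARN/MapReduce config files. A sketch of the likely missing wiring (Java; the /etc/hadoop/conf paths and the principal value are assumptions):

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class RenewerConf {
    public static Configuration withClusterConfigs() {
        Configuration conf = new Configuration();
        // Without these, the Configuration has no
        // yarn.resourcemanager.principal, so TokenCache cannot choose a
        // renewer for the delegation tokens the Sqoop job requests.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        // More surgical alternative (value is an assumption):
        // conf.set("yarn.resourcemanager.principal", "yarn/_HOST@MSO.COM");
        return conf;
    }
}
```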

Error starting Hive on Linux

$beeline -u jdbc:hive2://192.168.141.142:10000
Connecting to jdbc:hive2://192.168.141.142:10000
17/07/11 23:37:38 INFO jdbc.Utils: Supplied authorities: 192.168.141.142:10000
17/07/11 23:37:38 INFO jdbc.Utils: Resolved authority: 192.168.141.142:10000
17/07/11 23:37:38 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://192.168.141.142:10000
17/07/11 23:37:39 ERROR jdbc.HiveConnection: Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:156)
at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:143)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:583)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
at org.apache.hive.beeline.Commands.connect(Commands.java:1149)
at org.apache.hive.beeline.Commands.connect(Commands.java:1070)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:970)
at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:707)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:757)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)
Error: Could not establish connection to jdbc:hive2://192.168.141.142:10000: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) (state=08S01,code=0)
Beeline version 1.6.2 by Apache Hive
0: jdbc:hive2://192.168.141.142:10000 (closed)>

kerberos tool reports Run spy First

Happy New Year, everyone~ I am using kerberos v1.13 for the first time. After clicking "Browse" to locate WinRAR.exe and then clicking "inject", nothing happens, and no .rep file appears in WinRAR's directory. When I click the arrow after the .rep path in the kerberos UI, it reports "Run spy First". What is going on here?

When exporting a Hive ORC table with Sqoop, can the Hive Metastore be left off?

In a Kerberos environment, when exporting a Hive ORC-format table via sqoop + hcatalog, can it be done without starting the Hive Metastore? Testing shows that without Kerberos authentication, exporting ORC data via Sqoop does not require the Hive Metastore, but with Kerberos enabled, exporting ORC-format data from HDFS does require it. Is there a way to export ORC data under Kerberos without starting the Hive Metastore?

With Kerberos enabled, Oozie job execution fails

In secure mode with Kerberos authentication enabled, submitting a YARN job from the Oozie command line fails with: Client cannot authenticate via [TOKEN, KERBEROS]; Host details: local host is "hst01"; destination host is "hst03":9000;

Java connection to HBase with Kerberos fails after 24 hours

Java connects to HBase under Kerberos, but authentication starts failing after roughly 24 hours. Any pointers would be appreciated.
```
try {
    // Resolve krb5.ini and the keytab from the classpath.
    String krbStr = Thread.currentThread().getContextClassLoader()
            .getResource("krb5.ini").getFile();
    String keyStr = Thread.currentThread().getContextClassLoader()
            .getResource(keytab).getFile();
    System.setProperty("java.security.krb5.conf", krbStr);
    UserGroupInformation.setConfiguration(conf);
    // One-time login from the keytab; nothing below ever renews the TGT.
    UserGroupInformation.loginUserFromKeytab(principal, keyStr);
} catch (IOException e) {
    log.error(e);
}
try {
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.tableExists(tableName)) {
        HTableDescriptor tableDescriptor = new HTableDescriptor(tableName.getBytes());
        tableDescriptor.addFamily(new HColumnDescriptor("data"));
        admin.createTable(tableDescriptor);
    }
} catch (Exception e) {
    log.error(e);
}
```
The code above works fine when it first starts, but after about 24 hours the call to admin.tableExists(tableName) begins to fail with the following error:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Sat Jul 13 14:30:29 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68090: row 'p_rsdisk,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=xx-xx-xx-xx-xx.indata.com,16020,1560266314048, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:436)
at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:177)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68090: row 'p_rsdisk,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=xx-xx-xx-xx-xx.indata.com,16020,1560266314048, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.IOException: Could not set up IO Streams to xx-xx-xx-xx-xx.indata.com/xx.xx.xx.xx:16020
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:777)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:885)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:854)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1180)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:677)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:635)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:743)
... 17 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:609)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:154)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:732)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:732)
... 17 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 26 more
What is the cause here — is it the Kerberos ticket expiring?
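The symptom (GSS "Failed to find any Kerberos tgt" starting roughly one ticket lifetime after startup) is consistent with TGT expiry: loginUserFromKeytab logs in once, and nothing in the code above ever renews or re-obtains the ticket. A common mitigation is to ask the login user to re-login from the keytab before each HBase call. The sketch below is not from the original post; it assumes the login was performed with UserGroupInformation.loginUserFromKeytab as shown above, and the helper name ensureFreshTgt is made up for illustration.
```
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public final class KerberosRelogin {

    /**
     * Call this before each HBase operation (or from a scheduled task).
     * checkTGTAndReloginFromKeytab() is a no-op while the TGT is still
     * fresh, and re-logins from the keytab once the ticket approaches
     * expiry, so the 24-hour SASL failure should never materialize.
     */
    public static void ensureFreshTgt() {
        try {
            UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
        } catch (IOException e) {
            throw new RuntimeException("Kerberos re-login from keytab failed", e);
        }
    }
}
```
With something like this in place, the call site would invoke ensureFreshTgt() before admin.tableExists(tableName); an alternative is a background thread calling it on an interval well below the ticket_lifetime configured in krb5.conf.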

Help: enabling Kerberos from Cloudera Manager fails

CDH version 5.7. Enabling Kerberos from Cloudera Manager fails. The configuration files are as follows.
/etc/krb5.conf
```
[libdefaults]
 default_realm = BIGDATA.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 7d
 #ticket_lifetime = 2147483647
 renew_lifetime = 30d
 #renew_lifetime = 2147483647
 forwardable = true
 #renewable = true

[realms]
 BIGDATA.COM = {
  kdc = bigdata-m-003
  admin_server = bigdata-m-003
 }

[domain_realm]
 .bigdata.com = BIGDATA.COM
 bigdata.com = BIGDATA.COM
```
/var/kerberos/krb5kdc/kadm5.acl
```
*/admin@BIGDATA.COM *
```
/var/kerberos/krb5kdc/kdc.conf
```
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 BIGDATA.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = rc4-hmac:normal aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  max_life = 1d 0h 0m 0s
  max_renewable_life = 5d 0h 0m 0s
 }
```
The Cloudera Manager "Enable Kerberos" wizard was run mostly with default settings: KDC type MIT KDC, encryption type rc4-hmac, the maximum renewable lifetime for Kerberos principals set to 0, and "Manage krb5.conf through Cloudera Manager" checked. Importing the KDC Account Manager credentials reported no errors, but the enablement still failed later in the process. The problem went through two phases:
1. While some hostnames still contained uppercase letters (for example Bigdata-m-001), the Hadoop component credentials were generated normally, e.g. HTTP/Bigdata-m-001@BIGDATA.COM, but the components failed to start, complaining that HTTP/bigdata-m-001@BIGDATA.COM could not be found. After much investigation the conclusion was that hostnames should not contain uppercase letters.
2. After all hostnames were changed to lowercase, every hostname-related setting in Cloudera Manager was updated to lowercase as well. With Kerberos disabled the cluster runs normally. With Kerberos enabled, at the "Generate Missing Credentials" step only the credentials shown below are generated successfully; all the others, such as HTTP/bigdata-m-001@BIGDATA.COM, fail to generate, with no error reported at all.
![Kerberos credentials page](https://img-ask.csdn.net/upload/201708/02/1501641809_237542.jpg)
Could anyone help figure out what is going on here and how to fix it?
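One triage step worth trying when "Generate Missing Credentials" fails silently (not from the original post; the admin principal admin/admin@BIGDATA.COM below is a placeholder for whatever account was imported into Cloudera Manager): create one of the failing principals by hand against the same KDC. If manual creation or keytab extraction also fails, the problem is on the KDC side, for example the supported_enctypes list or the kadm5.acl, rather than in Cloudera Manager; if it succeeds, attention shifts to the Cloudera Manager server logs (typically under /var/log/cloudera-scm-server/).
```
# Placeholder admin principal; substitute the account imported into CM.
kadmin -p admin/admin@BIGDATA.COM -q "addprinc -randkey HTTP/bigdata-m-001@BIGDATA.COM"

# Extract a keytab for it and confirm it can actually authenticate.
kadmin -p admin/admin@BIGDATA.COM -q "xst -k /tmp/http-test.keytab HTTP/bigdata-m-001@BIGDATA.COM"
kinit -kt /tmp/http-test.keytab HTTP/bigdata-m-001@BIGDATA.COM && klist
```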
