Kerberos authentication fails for a Java program scheduled by Oozie

Hi all,
I wrote a Java program that reads real-time files on HDFS, parses them, and stores the results in HBase. The program is deployed on the Hadoop cluster, and I use Oozie to schedule it periodically. Before Kerberos authentication was enabled, everything worked fine.
After Kerberos security was enabled as required, the job started failing. The error is as follows:


2018-06-25 12:15:02,551 WARN [SimpleAsyncTaskExecutor-1] org.apache.hadoop.hbase.ipc.RpcClientImpl: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2018-06-25 12:15:02,551 FATAL [SimpleAsyncTaskExecutor-1] org.apache.hadoop.hbase.ipc.RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:618)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:163)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:744)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:741)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:741)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:907)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:34070)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1594)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1411)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1211)
    at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:988)
    at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:905)
    at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$100(AsyncProcess.java:615)
    at org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:597)
    at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:974)
    at com.ailk.xdrloader.batch.writer.EnhanceFileItemWriter.writeHbaseByApi(EnhanceFileItemWriter.java:377)
    at com.ailk.xdrloader.batch.writer.EnhanceFileItemWriter.writeToHbase(EnhanceFileItemWriter.java:309)
    at com.ailk.xdrloader.batch.writer.EnhanceFileItemWriter.write(EnhanceFileItemWriter.java:103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
    at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:133)
    at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:121)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
    at com.sun.proxy.$Proxy32.write(Unknown Source)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:175)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:151)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:274)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:199)
    at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:75)
    at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:406)
    at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:330)
    at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
    at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:271)
    at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:81)
    at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:374)
    at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
    at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:144)
    at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:257)
    at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:200)
    at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:139)
    at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:136)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:268)
    at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
    at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
    at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
    at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
    ... 58 more

At first I thought the keytab or krb5.conf was the problem, but when I run the program locally everything works; it only fails when launched through Oozie. Does invoking it through Oozie require any special handling?

The Kerberos login code is as follows:

 public static Configuration getConfInstance() {
        if (conf == null) {
            synchronized (HbaseUtil.class) {
                if (conf != null)
                    return conf;
                LOG.info("krb5 file:" + krb5Conf + "\tkeytab file:" + keyTab + "\tprincipal:" + principal);
                System.setProperty("java.security.krb5.conf", krb5Conf);

                conf = HBaseConfiguration.create();
                conf.addResource("core-site.xml");
                conf.addResource("hdfs-site.xml");
                conf.set("keytab.file", keyTab);
                conf.set("kerberos.principal", principal);
                try {
                    UserGroupInformation.setConfiguration(conf);
                    UserGroupInformation.loginUserFromKeytab(principal, keyTab);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        return conf;
    }
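A pattern often suggested for Kerberos-secured jobs launched by Oozie is to keep the UGI returned by the keytab login and run all HBase calls inside ugi.doAs(...), instead of relying on the process-wide static login user, and to refer to the keytab and krb5.conf by relative names so they resolve inside the launcher container (see the workflow note after the answer below). A minimal sketch, assuming HBase 1.x; the principal, file names, table, and class name are placeholders:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosHBaseSketch {

    public static void main(String[] args) throws Exception {
        // Use a relative name so the file resolves in the container's working
        // directory on whichever node the Oozie launcher lands on (assumes the
        // file was shipped with the workflow via a <file> element).
        System.setProperty("java.security.krb5.conf", "krb5.conf");

        // HBaseConfiguration.create() picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hbase.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in from the keytab and keep the resulting UGI instead of
        // relying on the static login user.
        UserGroupInformation ugi = UserGroupInformation
                .loginUserFromKeytabAndReturnUGI("user@EXAMPLE.COM", "user.keytab");

        // Run the HBase work inside doAs; threads created within this block
        // (e.g. async task executors) capture the logged-in subject.
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("my_table"))) {
                table.get(new Get(Bytes.toBytes("some-row")));
            }
            return null;
        });
    }
}
```

Threads created inside the doAs block inherit the logged-in subject, which matters here because the stack trace shows the failure happening on a SimpleAsyncTaskExecutor worker thread.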

1 Answer

public static Configuration getConfInstance() {
    if (conf == null) {
        synchronized (HbaseUtil.class) {
            if (conf != null)
                return conf;
            LOG.info("krb5 file:" + krb5Conf + "\tkeytab file:" + keyTab + "\tprincipal:" + principal);
            System.setProperty("java.security.krb5.conf", krb5Conf);

            conf = HBaseConfiguration.create();
            conf.addResource("core-site.xml");
            conf.addResource("hdfs-site.xml");
            conf.set("keytab.file", keyTab);
            conf.set("kerberos.principal", principal);
            try {
                UserGroupInformation.setConfiguration(conf);
                UserGroupInformation.loginUserFromKeytab(principal, keyTab);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return conf;
}
yeasxy replied (about 2 years ago): Dude, how is this any different from my code above...?
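As the follow-up comment notes, the pasted answer is identical to the code in the question, so it does not address the problem. One likely cause, though not confirmed here: the Oozie launcher runs in a YARN container on an arbitrary node, so absolute local paths to the keytab and krb5.conf often do not exist there even though they do on the machine where the program was tested by hand. A common remedy is to ship both files with the workflow via <file> elements, which symlinks them into the container's working directory so the program can open them by relative name. An illustrative java-action fragment (app path, file names, and main class are placeholders):

```xml
<action name="load-to-hbase">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <main-class>com.example.XdrLoaderMain</main-class>
        <!-- Files listed here are localized into the launcher container's
             working directory, so the program can open them by the relative
             names "user.keytab" and "krb5.conf" on any node. -->
        <file>${nameNode}/user/oozie/apps/xdrloader/user.keytab#user.keytab</file>
        <file>${nameNode}/user/oozie/apps/xdrloader/krb5.conf#krb5.conf</file>
    </java>
    <ok to="end"/>
    <error to="fail"/>
</action>
```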
