Oozie Sqoop import job fails with an error

Hi all, while running a Sqoop import through Oozie I hit the error below. Has anyone seen this, and is there a good way to fix it? Log Length: 5997
```
log4j:ERROR Could not find value for key log4j.appender.CLA
log4j:ERROR Could not instantiate appender named "CLA".
log4j:WARN No appenders could be found for logger (org.apache.hadoop.yarn.client.RMProxy).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Note: /tmp/sqoop-yarn/compile/9f3c81b0062cec6973184f1f95c215f9/JC_AJXX.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI pattern: dataset:hive:/test/JC_AJXX
all_scheme are [URIPattern{pattern=file:/*path/:namespace/:dataset?absolute=true},
  URIPattern{pattern=file:*path/:namespace/:dataset},
  URIPattern{pattern=hdfs:/*path/:namespace/:dataset?absolute=true},
  URIPattern{pattern=hdfs:*path/:namespace/:dataset},
  URIPattern{pattern=webhdfs:/*path/:namespace/:dataset?absolute=true}]
Check that JARs for hive datasets are on the classpath
	at org.kitesdk.data.spi.Registration.lookupDatasetUri(Registration.java:108)
	at org.kitesdk.data.Datasets.create(Datasets.java:228)
	at org.kitesdk.data.Datasets.create(Datasets.java:307)
	at org.apache.sqoop.mapreduce.ParquetJob.createDataset(ParquetJob.java:107)
	at org.apache.sqoop.mapreduce.ParquetJob.configureImportJob(ParquetJob.java:89)
	at org.apache.sqoop.mapreduce.DataDrivenImportJob.configureMapper(DataDrivenImportJob.java:106)
	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:260)
	at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:668)
	at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
	at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:196)
	at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:176)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:46)
	at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:46)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:228)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:370)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:295)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:181)
	at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:224)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Intercepting System.exit(1)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Dec 04, 2015 2:35:06 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Dec 04, 2015 2:35:07 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Dec 04, 2015 2:35:07 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Dec 04, 2015 2:35:07 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Dec 04, 2015 2:35:07 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Dec 04, 2015 2:35:07 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Dec 04, 2015 2:35:07 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Dec 04, 2015 2:35:07 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
```
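Editorial note on the error itself: `DatasetNotFoundException: Unknown dataset URI pattern: dataset:hive:...` means the Kite `hive:` URI scheme was never registered in the launcher JVM, which is exactly what the log's own hint says ("Check that JARs for hive datasets are on the classpath"). The Oozie Sqoop sharelib often lacks the Kite Hive-dataset JARs that a Parquet/Hive import needs, even when the same import works from a local shell. A minimal sketch of the commonly reported fix; the paths, JAR versions and Oozie URL are assumptions to adapt, not values from the log:

```
# Copy the Kite Hive-dataset JAR (plus its Hive dependencies) from your Sqoop
# install into the Oozie sqoop sharelib, then ask Oozie to reload the sharelib.
# <timestamp> and the URL below are placeholders.
hdfs dfs -put kite-data-hive-*.jar /user/oozie/share/lib/lib_<timestamp>/sqoop/
oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
```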

Other related questions
Oozie Sqoop import action throws an exception
oozie 4.3.1, sqoop 1.4.7. workflow.xml:
```
<workflow-app xmlns="uri:oozie:workflow:0.5" name="sqoop-import-wf">
    <start to="sqoop-node"/>
    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>oozie.sqoop.log.level</name>
                    <value>WARN</value>
                </property>
            </configuration>
            <command>import --connect jdbc:mysql://study:3306/test --username root --password 123456 --table terminal_info --where "update_time between 20180615230000 and 20180616225959" --target-dir "/user/hive/warehouse/temp_terminal_info" --append --fields-terminated-by "," --lines-terminated-by "\n" --num-mappers 1 --direct</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```
Oozie job log:
```
2018-06-19 10:12:10,615 WARN SqoopActionExecutor:523 - SERVER[study] USER[root] GROUP[-] TOKEN[] APP[sqoop-import-wf] JOB[0000017-180619092621453-oozie-root-W] ACTION[0000017-180619092621453-oozie-root-W@sqoop-node] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
```
The MapReduce job itself succeeded:
```
Job Name:  oozie:launcher:T=sqoop:W=sqoop-import-wf:A=sqoop-node:ID=0000017-180619092621453-oozie-root-W
User Name: root
Queue:     root.root
State:     SUCCEEDED
Uberized:  true
Submitted: Tue Jun 19 10:11:59 CST 2018
Started:   Tue Jun 19 10:12:07 CST 2018
Finished:  Tue Jun 19 10:12:08 CST 2018
Elapsed:   1sec
Diagnostics:
Average Map Time 1sec
```
But Oozie still reports "Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]" and there are no other logs. Has anyone run into this, or can you suggest how to troubleshoot it? Thanks in advance.
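A way to dig out the real failure behind `exit code [1]` (commands only; substitute your own Oozie URL and IDs): the launcher MapReduce job can finish SUCCEEDED even though the Sqoop process inside it failed, so the useful output lives in the launcher container's stdout/stderr rather than in the Oozie job log.

```
# Workflow-level log (job id taken from the log above):
oozie job -oozie http://study:11000/oozie -log 0000017-180619092621453-oozie-root-W

# The launcher container's logs hold Sqoop's actual error output;
# <application_id> is the launcher application shown in the ResourceManager UI.
yarn logs -applicationId <application_id>
```

One frequent culprit with `--direct` specifically: mysqldump has to be installed on every NodeManager that might run the launcher, which is easy to satisfy in an interactive shell and miss on the cluster.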
Oozie Sqoop import action stays in RUNNING state forever
The sqoop import command works fine when run directly in a terminal, but calling it through Oozie hits this problem. Calling sqoop list-databases through Oozie also succeeds. Can anyone help me work out where the problem is: insufficient memory, or something else?

workflow.xml:
```
<workflow-app xmlns="uri:oozie:workflow:0.5" name="sqoop-import-wf">
    <start to="sqoop-node"/>
    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <command>import --connect jdbc:mysql://10.148.1.100:3306/test --username root --password 123456 --table person --target-dir /user/root/testSqoop/output --fields-terminated-by "," --num-mappers 1 --direct</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```
Oozie console:
![图片说明](https://img-ask.csdn.net/upload/201806/14/1528947330_88228.png)
Hadoop applications:
![图片说明](https://img-ask.csdn.net/upload/201806/14/1528947357_833279.png)
![图片说明](https://img-ask.csdn.net/upload/201806/14/1528947683_649469.png)
Question: why are two MAPREDUCE jobs started?
VM memory status:
![图片说明](https://img-ask.csdn.net/upload/201806/14/1528947717_507668.png)
Log:
```
2018-06-14 11:08:20,625 [uber-SubtaskRunner] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2018-06-14 11:08:20,649 [uber-SubtaskRunner] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.6
2018-06-14 11:08:20,664 [uber-SubtaskRunner] WARN org.apache.sqoop.tool.BaseSqoopTool - Setting your password on the command-line is insecure. Consider using -P instead.
2018-06-14 11:08:20,676 [uber-SubtaskRunner] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2018-06-14 11:08:20,779 [uber-SubtaskRunner] INFO org.apache.sqoop.manager.MySQLManager - Preparing to use a MySQL streaming resultset.
2018-06-14 11:08:20,784 [uber-SubtaskRunner] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code generation
2018-06-14 11:08:21,235 [uber-SubtaskRunner] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `person` AS t LIMIT 1
2018-06-14 11:08:21,261 [uber-SubtaskRunner] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `person` AS t LIMIT 1
2018-06-14 11:08:21,266 [uber-SubtaskRunner] INFO org.apache.sqoop.orm.CompilationManager - HADOOP_MAPRED_HOME is /opt/hadoop-2.6.0
2018-06-14 11:08:23,470 [uber-SubtaskRunner] INFO org.apache.sqoop.orm.CompilationManager - Writing jar file: /tmp/sqoop-root/compile/8750ac72c9db5da23fa902b0a16d1957/person.jar
2018-06-14 11:08:23,485 [uber-SubtaskRunner] INFO org.apache.sqoop.manager.DirectMySQLManager - Beginning mysqldump fast path import
2018-06-14 11:08:23,485 [uber-SubtaskRunner] INFO org.apache.sqoop.mapreduce.ImportJobBase - Beginning import of person
2018-06-14 11:08:23,486 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2018-06-14 11:08:23,491 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
2018-06-14 11:08:23,505 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2018-06-14 11:08:23,509 [uber-SubtaskRunner] WARN org.apache.sqoop.mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
2018-06-14 11:08:23,559 [uber-SubtaskRunner] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at study/10.148.1.100:8032
2018-06-14 11:08:23,928 [uber-SubtaskRunner] INFO org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation
2018-06-14 11:08:23,996 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2018-06-14 11:08:24,051 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1528942657382_0005
2018-06-14 11:08:24,051 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.JobSubmitter - Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 4 cluster_timestamp: 1528942657382 } attemptId: 1 } keyId: 589196118)
2018-06-14 11:08:24,053 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.JobSubmitter - Kind: RM_DELEGATION_TOKEN, Service: 10.148.1.100:8032, Ident: (owner=root, renewer=oozie mr token, realUser=root, issueDate=1528945693299, maxDate=1529550493299, sequenceNumber=3, masterKeyId=2)
2018-06-14 11:08:24,211 [uber-SubtaskRunner] WARN org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://study:9000/user/root/testSqoop/lib/sqoop-1.4.6-hadoop200.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://study:9000/tmp/hadoop-yarn/staging/root/.staging/job_1528942657382_0005/libjars/sqoop-1.4.6-hadoop200.jar This will be an error in Hadoop 2.0
2018-06-14 11:08:24,213 [uber-SubtaskRunner] WARN org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://study:9000/user/root/testSqoop/lib/mysql-connector-java.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://study:9000/tmp/hadoop-yarn/staging/root/.staging/job_1528942657382_0005/libjars/mysql-connector-java.jar This will be an error in Hadoop 2.0
2018-06-14 11:08:24,265 [uber-SubtaskRunner] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1528942657382_0005
2018-06-14 11:08:24,301 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://study:8088/proxy/application_1528942657382_0005/
2018-06-14 11:08:24,301 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://study:8088/proxy/application_1528942657382_0005/
2018-06-14 11:08:24,302 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1528942657382_0005
2018-06-14 11:08:24,302 [uber-SubtaskRunner] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1528942657382_0005
2018-06-14 11:08:25,976 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1528942657382_0004_m_000000_0 is : 1.0
2018-06-14 11:08:25,976 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1528942657382_0004_m_000000_0 is : 1.0
Heart beat
2018-06-14 11:08:56,045 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1528942657382_0004_m_000000_0 is : 1.0
2018-06-14 11:08:56,045 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1528942657382_0004_m_000000_0 is : 1.0
Heart beat
2018-06-14 11:09:26,082 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1528942657382_0004_m_000000_0 is : 1.0
2018-06-14 11:09:26,082 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1528942657382_0004_m_000000_0 is : 1.0
Heart beat
```
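On the "why two MAPREDUCE jobs" question: the first application is the Oozie launcher, which is itself a MapReduce job, and the second is the actual Sqoop import it submits. On a small single-node VM the launcher can hold the only available memory/vcores, leaving the import job waiting for a container indefinitely, which looks exactly like an action stuck in RUNNING. A quick check while it hangs, using the standard YARN CLI:

```
# Expect two applications: the Oozie launcher plus the Sqoop import job.
yarn application -list
# Confirm there is capacity left for the second application to start.
yarn node -list -all
```

If the node is out of resources, raising yarn.nodemanager.resource.memory-mb (or putting the launcher in a separate queue) is the usual way out.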
Oozie install: oozie-docs module fails to build
```
[INFO] Unable to load parent project from a relative path: 1 problem was encountered while building the effective model
[FATAL] Non-readable POM /home/oozie/oozie-4.2.0/../pom.xml: /home/oozie/oozie-4.2.0/../pom.xml (No such file or directory) @ for project at /home/oozie/oozie-4.2.0/../pom.xml for project at /home/oozie/oozie-4.2.0/../pom.xml
[INFO] Parent project loaded from repository.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Oozie Main ................................. SUCCESS [  1.418 s]
[INFO] Apache Oozie Hadoop Utils ......................... SUCCESS [  0.931 s]
[INFO] Apache Oozie Hadoop Distcp hadoop-1-4.2.0 ......... SUCCESS [  0.114 s]
[INFO] Apache Oozie Hadoop Auth hadoop-1-4.2.0 ........... SUCCESS [  0.151 s]
[INFO] Apache Oozie Hadoop Libs .......................... SUCCESS [  0.022 s]
[INFO] Apache Oozie Client ............................... SUCCESS [ 15.918 s]
[INFO] Apache Oozie Share Lib Oozie ...................... SUCCESS [  1.950 s]
[INFO] Apache Oozie Share Lib HCatalog ................... SUCCESS [  7.325 s]
[INFO] Apache Oozie Share Lib Distcp ..................... SUCCESS [  0.618 s]
[INFO] Apache Oozie Core ................................. SUCCESS [ 36.660 s]
[INFO] Apache Oozie Share Lib Streaming .................. SUCCESS [  3.166 s]
[INFO] Apache Oozie Share Lib Pig ........................ SUCCESS [  4.054 s]
[INFO] Apache Oozie Share Lib Hive ....................... SUCCESS [  8.105 s]
[INFO] Apache Oozie Share Lib Hive 2 ..................... SUCCESS [  3.258 s]
[INFO] Apache Oozie Share Lib Sqoop ...................... SUCCESS [  1.792 s]
[INFO] Apache Oozie Examples ............................. SUCCESS [  3.329 s]
[INFO] Apache Oozie Share Lib Spark ...................... SUCCESS [  3.642 s]
[INFO] Apache Oozie Share Lib ............................ SUCCESS [ 12.581 s]
[INFO] Apache Oozie Docs ................................. FAILURE [ 16.097 s]
[INFO] Apache Oozie WebApp ............................... SKIPPED
[INFO] Apache Oozie Tools ................................ SKIPPED
[INFO] Apache Oozie MiniOozie ............................ SKIPPED
[INFO] Apache Oozie Distro ............................... SKIPPED
[INFO] Apache Oozie ZooKeeper Security Tests ............. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:02 min
[INFO] Finished at: 2015-11-30T17:12:19+08:00
[INFO] Final Memory: 192M/2008M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:2.0-beta-6:site (default) on project oozie-docs: Error parsing site descriptor: Unrecognised tag: 'html' (position: START_TAG seen <html>... @1:6) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :oozie-docs
ERROR, Oozie distro creation failed
```
Oozie error: coord action is null
But my job.properties already specifies the location of workflow.xml.
Oozie starts up fine, but running a simple shell script fails
This is the error:
```
2018-10-18 15:10:17,363 INFO ActionStartXCommand:520 - SERVER[3HHadoopMaster1] USER[admin] GROUP[-] TOKEN[] APP[测试脚本] JOB[0000018-181018103446731-oozie-hado-W] ACTION[0000018-181018103446731-oozie-hado-W@shell-b6f7] Next Retry, Attempt Number [3] in [10,000] milliseconds
2018-10-18 15:10:27,408 INFO ActionStartXCommand:520 - SERVER[3HHadoopMaster1] USER[admin] GROUP[-] TOKEN[] APP[????] JOB[0000018-181018103446731-oozie-hado-W] ACTION[0000018-181018103446731-oozie-hado-W@shell-b6f7] Start action [0000018-181018103446731-oozie-hado-W@shell-b6f7] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2018-10-18 15:10:27,534 WARN ActionStartXCommand:523 - SERVER[3HHadoopMaster1] USER[admin] GROUP[-] TOKEN[] APP[测试脚本] JOB[0000018-181018103446731-oozie-hado-W] ACTION[0000018-181018103446731-oozie-hado-W@shell-b6f7] Error starting action [shell-b6f7]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Unknown rpc kind in rpc headerRPC_WRITABLE]
org.apache.oozie.action.ActionExecutorException: JA009: Unknown rpc kind in rpc headerRPC_WRITABLE
	at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
	at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:437)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1244)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1411)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
	at org.apache.oozie.command.XCommand.call(XCommand.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcServerException): Unknown rpc kind in rpc headerRPC_WRITABLE
	at org.apache.hadoop.ipc.Client.call(Client.java:1504)
	at org.apache.hadoop.ipc.Client.call(Client.java:1441)
	at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:246)
	at org.apache.hadoop.mapred.$Proxy33.getDelegationToken(Unknown Source)
	at org.apache.hadoop.mapred.JobClient.getDelegationToken(JobClient.java:2153)
	at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:502)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:1454)
	at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1192)
	... 9 more
2018-10-18 15:10:27,538 WARN ActionStartXCommand:523 - SERVER[3HHadoopMaster1] USER[admin] GROUP[-] TOKEN[] APP[测试脚本] JOB[0000018-181018103446731-oozie-hado-W] ACTION[0000018-181018103446731-oozie-hado-W@shell-b6f7] Exceeded max retry count [3]. Suspending Job
2018-10-18 15:10:27,538 WARN ActionStartXCommand:523 - SERVER[3HHadoopMaster1] USER[admin] GROUP[-] TOKEN[] APP[测试脚本] JOB[0000018-181018103446731-oozie-hado-W] ACTION[0000018-181018103446731-oozie-hado-W@shell-b6f7] Suspending Workflow Job id=0000018-181018103446731-oozie-hado-W
```
Script contents:
```
#! /bin/bash
echo "hello" >> /home/hadoop/text.txt
```
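`JA009: Unknown rpc kind in rpc header RPC_WRITABLE` means Oozie spoke the old Writable-based MapReduce RPC to a daemon that only understands the YARN protocols, most often because `jobTracker` in job.properties points at the wrong daemon or port for a Hadoop 2 cluster. A sketch of the usual fix; the host is taken from the log above, but the ports are assumptions (8032 is the default ResourceManager RPC port):

```
# job.properties sketch: jobTracker must point at the YARN ResourceManager
# RPC port, not at an old JobTracker or the NameNode. Ports are assumed defaults.
cat > job.properties <<'EOF'
nameNode=hdfs://3HHadoopMaster1:9000
jobTracker=3HHadoopMaster1:8032
queueName=default
EOF
```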
With Kerberos enabled, Oozie jobs fail to run
In secure mode with Kerberos authentication enabled, submitting a YARN job from the Oozie command line fails with: Client cannot authenticate via [TOKEN, KERBEROS]; Host details: localhost is "hst01"; destination host is "hst03":9000;
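"Client cannot authenticate via [TOKEN, KERBEROS]" usually just means the submitting process has no valid Kerberos ticket (or the wrong principal) when it first touches HDFS/YARN. A minimal pre-flight check; the principal, keytab path and Oozie URL are placeholders, not values from the question:

```
kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM  # placeholder principal/keytab
klist                                                             # confirm a valid TGT exists
oozie job -oozie http://hst01:11000/oozie -auth KERBEROS -config job.properties -run
```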
Oozie job runs successfully, but the workflow status shows KILLED
I configured a job in Oozie. The job itself completes successfully, but the workflow status shows KILLED. The logs show the errors below: ![图片说明](https://img-ask.csdn.net/upload/201908/08/1565255997_632507.png)![图片说明](https://img-ask.csdn.net/upload/201908/08/1565256009_163062.png) I read online that swapping the MySQL driver JAR fixes this; I already tried that and it is not the cause. I am using Hue, and the HDP version is 2.6.
Oozie shell action running HQL: the generated MR jobs run in local mode
Has anyone used an Oozie shell action where the shell runs `hive -e "$hql"`? I've hit a problem: every MR job Hive generates is a local job. Has anyone seen this? Any pointers would be appreciated.
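Hive falls back to local-mode MapReduce when the client configuration it picks up has no real cluster settings (without a proper mapred-site.xml on the classpath, mapreduce.framework.name effectively defaults to local). A shell action runs in a bare YARN container on some NodeManager, so the script does not inherit the environment your interactive shell has. One sketch, assuming standard client-config locations on the NodeManagers:

```
#!/bin/bash
# Point Hive at real cluster client configs and pin the framework to yarn.
# /etc/hadoop/conf and /etc/hive/conf are assumed paths; adjust to your layout.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HIVE_CONF_DIR=/etc/hive/conf
hive --hiveconf mapreduce.framework.name=yarn -e "$hql"
```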
Running an Oozie coordinator on a schedule: if start is set too far in the past, how do I avoid the catch-up runs?
```
<coordinator-app name="zztj-coord=increment-sqoop" frequency="${coord:minutes(5)}"
                 start="2017-08-01T16:00+0800" end="2017-08-01T18:00+0800" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.2">
    <action>
        <workflow>
            <app-path>${workflowAppUri}</app-path>
            <configuration>
                <property>
                    <name>jobTracker</name>
                    <value>${jobTracker}</value>
                </property>
                <property>
                    <name>nameNode</name>
                    <value>${nameNode}</value>
                </property>
                <property>
                    <name>queueName</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
        </workflow>
    </action>
</coordinator-app>
```
The above is my coordinator.xml. I start this Oozie job at 17:00 on 2017-08-01, expecting it to run once at 17:00, once at 17:05, once at 17:10, and so on. In practice that is not what happens: as soon as the job starts, it automatically runs 12 times in a row, catching up on 16:00, then 16:05, then 16:10, and so on. Only after all the catch-up runs does it settle into the normal schedule. How can I avoid this? Thanks.
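Oozie has no built-in "skip missed intervals" switch: a coordinator materializes one action for every interval between start and now. The usual workaround is to stop hard-coding a past start and instead compute it at submission time, so start is effectively "now". A sketch, assuming coordinator.xml is changed to declare start="${startTime}":

```
# Compute "now" in the Z-suffixed datetime format Oozie accepts and pass it
# in at submit time. The Oozie URL is a placeholder.
startTime=$(date -u +%Y-%m-%dT%H:%MZ)
oozie job -oozie http://localhost:11000/oozie -config job.properties -run -DstartTime=${startTime}
```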
Exception when submitting the bundled map-reduce example to Oozie
Hadoop installed successfully, Oozie installed successfully, and the web UI displays normally. Submitting the map-reduce example job throws an exception. Log:
```
2015-08-21 19:20:44,442 WARN V1JobsServlet:542 - USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] URL[POST http://master:11000/oozie/v1/jobs?action=start] error[E0902], E0902: Exception occured: [Call to master/192.168.52.129:9000 failed on local exception: java.io.IOException: Broken pipe]
org.apache.oozie.servlet.XServletException: E0902: Exception occured: [Call to master/192.168.52.129:9000 failed on local exception: java.io.IOException: Broken pipe]
	at org.apache.oozie.servlet.BaseJobServlet.checkAuthorizationForApp(BaseJobServlet.java:201)
	at org.apache.oozie.servlet.BaseJobsServlet.doPost(BaseJobsServlet.java:97)
```
Can anyone help?
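E0902 with a "Broken pipe" against the NameNode RPC port is frequently a Hadoop client/server version mismatch: the Hadoop JARs Oozie was built against do not match the running cluster. Two cautious sanity checks; the host and port come from the log, but the webapp lib path is the typical layout, not verified here:

```
# Does the cluster answer on that address at all?
hdfs dfs -ls hdfs://master:9000/
# Compare this version against the hadoop-*.jar versions bundled under the
# Oozie webapp (typically oozie-server/webapps/oozie/WEB-INF/lib).
hadoop version
```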
Oozie running a Spark job on a recurring schedule
Oozie schedules a Spark job every 5 minutes. If the Spark job has not finished within 5 minutes, will the next run read the same file contents and process them again? In other words: when a run takes longer than the schedule interval, does Oozie kick off the next cycle anyway, or does it wait for the current Spark run to finish first?
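By default a coordinator keeps materializing actions on schedule whether or not the previous one has finished, so runs can overlap; whether the same files get re-read then depends entirely on the workflow. The documented way to serialize runs is the coordinator `<controls>` block. A fragment sketch to merge inside `<coordinator-app>` by hand; the values are illustrative:

```
# Fragment for coordinator.xml (place inside <coordinator-app>):
# concurrency=1 keeps a new action from running while the previous one is
# still active, and execution=FIFO works through any backlog in order.
cat <<'EOF'
<controls>
  <concurrency>1</concurrency>
  <execution>FIFO</execution>
</controls>
EOF
```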
Problem calling shell scripts from Oozie
Hi all, I am calling shell scripts from Oozie. Simple scripts already work, for example creating an HDFS directory from the shell. But now my script needs to call a JAR written in Scala, like this: `nohup /apps/scala-2.10.5/bin/scala /opt/oozie-work/jars/STBLogProduct7.jar hdfs://bigdata1:8020 /user/root/mytest/stblog/ >> /opt/oozie-work/logs/producelog.log 2>&1 &`, and that fails. My question: when Oozie calls a shell script that in turn calls another JAR, should that JAR live on HDFS or on a local directory? Does the shell execute on the local filesystem, and if so, how does it find my JAR?
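A shell action executes inside a YARN container on some NodeManager, with its own local working directory, so `/opt/oozie-work/jars/...` only resolves if that path exists on every node. The usual pattern is to keep the jar in HDFS, ship it with the action via a `<file>` element, and reference it by its symlink name. A sketch, with the HDFS path assumed:

```
# In workflow.xml the shell action would declare something like:
#   <file>/user/root/oozie-work/jars/STBLogProduct7.jar#STBLogProduct7.jar</file>
# Oozie then localizes the jar into the container's working directory. Also
# drop nohup/&: if the script exits immediately, Oozie treats the action as done.
/apps/scala-2.10.5/bin/scala STBLogProduct7.jar hdfs://bigdata1:8020 /user/root/mytest/stblog/
```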
Compatibility problems running Oozie with Tez
I am building a big-data stack and want to use Oozie for scheduling the offline-computation part. My Hadoop MapReduce jobs execute on Tez. When running the examples that ship with Oozie: if Hadoop jobs use native MapReduce, that is, with mapreduce.framework.name set to yarn in mapred-site.xml, all the Oozie examples succeed. But when jobs run on Tez, with mapreduce.framework.name set to yarn-tez, a plain Hadoop job such as the stock wordcount example reports Tez as its execution engine and succeeds, while the Oozie examples fail with the error: JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses. Switching mapreduce.framework.name back to yarn makes the Oozie examples pass again. There are examples online of using the Tez engine for Hive jobs under Oozie, but I want all jobs, not just Hive jobs, to use Tez. How can I make everything Oozie runs compatible with Tez, or is that simply not possible? If it cannot be done, please explain why. Many thanks!
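One workaround that gets reported for this situation (unverified here, so treat it as an assumption to test on your versions): leave the cluster default at yarn-tez, but pin Oozie's own launcher back to plain yarn through the oozie.launcher.* prefix, which Oozie applies to the launcher job's configuration. As a fragment for each action's `<configuration>`:

```
# Fragment to merge into the action's <configuration> in workflow.xml.
cat <<'EOF'
<property>
  <name>oozie.launcher.mapreduce.framework.name</name>
  <value>yarn</value>
</property>
EOF
```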
Oozie installed through CDH fails while running jobs with the following error
ActionExecutorException: JA017: Could not lookup launched hadoop Job ID
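JA017 means Oozie submitted the launcher but could no longer find that job id when it queried the cluster; a common cause is the MapReduce JobHistory Server being down or unreachable from the Oozie host. Two quick checks, assuming the CDH default ports (10020 for RPC, 19888 for the web UI):

```
# Is the JobHistory Server listening on its RPC port?
netstat -tlnp | grep 10020
# Or hit its REST endpoint (replace the host with your JHS host):
curl http://historyserver-host:19888/ws/v1/history/info
```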
Oozie installation problem, advice wanted
After installation the port will not come up. Looking at oozie.log I found this problem; does anyone know what is going on? ![图片说明](https://img-ask.csdn.net/upload/201610/18/1476761215_224224.png) I checked under root and there is no oozie.keytab file there.
Accessing Hive from Spark 2 in a Kerberos environment fails
```
2019-05-13 21:27:07,394 [main] WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Self-test query [select "DB_ID" from "DBS"] failed; direct SQL is disabled
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
	at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:243)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:146)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:406)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:338)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:299)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:612)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:578)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:639)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6869)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:248)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1700)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3581)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3633)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3613)
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3867)
	at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247)
	at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:230)
	at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:387)
	at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:331)
	at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:311)
	at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:287)
	at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:895)
	at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:859)
	at org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1521)
	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:204)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:268)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:360)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:264)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:197)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:106)
	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:94)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:290)
	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1059)
	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:137)
	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:136)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:136)
	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:133)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
	at com.bigdata_example.oozie.SparkDemo.main(SparkDemo.java:23)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
	at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:181)
	at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:93)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:101)
	at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
	at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
	at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
	at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
	at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
	at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
NestedThrowablesStackTrace:
java.sql.SQLSyntaxErrorException: Table/View 'DBS' does not exist.
	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
	at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown Source)
	at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
	at com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:1193)
	at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:345)
	at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:211)
	at org.datanucleus.store.rdbms.query.SQLQuery.performExecute(SQLQuery.java:633)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1844)
	at org.datanucleus.store.rdbms.query.SQLQuery.executeWithArray(SQLQuery.java:807)
	at org.datanucleus.store.query.Query.execute(Query.java:1715)
	at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:371)
	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:213)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.runTestQuery(MetaStoreDirectSql.java:243)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:146)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:406)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:338)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:299)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:612)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:578)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:639)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6869)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:248)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1700)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3581)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3633)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3613)
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3867)
	at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247)
	at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:230)
	at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:387)
	at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:331)
	at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:311)
	at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:287)
	at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:895)
	at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:859)
	at org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1521)
	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:204)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:268)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:360)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:264)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:197)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:197)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:196)
	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:106)
	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:94)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:290)
	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1059)
	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:137)
	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:136)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:136)
	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:133)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
	at com.bigdata_example.oozie.SparkDemo.main(SparkDemo.java:23)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
	at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:181)
	at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:93)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:101)
	at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
	at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
	at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
	at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
	at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
	at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Caused by: ERROR 42X05: Table/View 'DBS' does not exist.
	at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
	at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
	at org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(Unknown Source)
	at org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(Unknown Source)
	at org.apache.derby.impl.sql.compile.FromList.bindTables(Unknown Source)
	at org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(Unknown Source)
	at org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(Unknown Source)
	at org.apache.derby.impl.sql.compile.DMLStatementNode.bind(Unknown Source)
	at org.apache.derby.impl.sql.compile.CursorNode.bindStatement(Unknown Source)
	at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
	at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
	at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
	... 113 more
```
Without Kerberos authentication it reports that the database cannot be found, which I guessed was a permissions problem. After adding Kerberos it instead throws java.lang.reflect.InvocationTargetException with Caused by: java.lang.NullPointerException.
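The tail of the nested trace is the giveaway: Derby complaining that table 'DBS' does not exist means the job never saw hive-site.xml, so Spark created an empty embedded Derby metastore instead of connecting to the real one. The NullPointerException after enabling Kerberos typically points the same way (no usable metastore config or credentials in the containers). A sketch of the equivalent spark-submit invocation; the principal, keytab, jar name and config path are placeholders, and with an Oozie spark action the same effect comes from shipping hive-site.xml alongside the job:

```
spark-submit \
  --master yarn --deploy-mode cluster \
  --files /etc/hive/conf/hive-site.xml \
  --principal myuser@EXAMPLE.COM \
  --keytab /etc/security/keytabs/myuser.keytab \
  --class com.bigdata_example.oozie.SparkDemo \
  bigdata-example.jar
```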
Installing Oozie on a cluster
Quick question: can I install separate Oozie instances on different nodes of the same cluster?
Deploying DataX on a Hadoop cluster and scheduling it with Oozie?
As the title says: DataX is a standalone application. Has anyone tried deploying it on a Hadoop cluster and then submitting its jobs to the cluster through Oozie for scheduling? Any pointers appreciated.
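Since a DataX job is just a local process (python datax.py &lt;job.json&gt;), nothing prevents wrapping one in an Oozie shell action; the constraint is that DataX must be installed at the same path on every NodeManager that could run the action, or the action pinned to hosts where it is. A sketch of the action's script, with install and job paths assumed:

```
#!/bin/bash
# DataX location and job file are illustrative; adjust to your deployment.
python /opt/datax/bin/datax.py /opt/datax/job/mysql_to_hdfs.json
```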
Spark 2 configured for Oozie in Hue: when the job runs, the Application Type shows MapReduce rather than Spark
As the title says; screenshot below: ![图片说明](https://img-ask.csdn.net/upload/201908/03/1564813792_360531.png)
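What the screenshot most likely shows is the Oozie launcher, which genuinely is a MapReduce application; when the action works, the actual Spark job registers as a second application of type SPARK. One way to confirm from the CLI:

```
# List only Spark applications; a working spark action shows up here
# alongside the MapReduce launcher.
yarn application -list -appTypes SPARK
```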