[Hive] HQL GROUP BY query fails with "Job status not available"

While testing Hive queries on the company test cluster, I ran into the following situation:

Basic statements such as plain SELECT queries run fine, for example

 select * from tablename;                        -- works

But the requirement involves grouping, which needs a GROUP BY:

 select name from tablename group by name;      -- fails

The same error occurs whether the statement is run from Java or from beeline. While the statement runs you can clearly see that it is executing, but in the end no result comes back and it fails. The full error is:

 Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Job status not available 
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Job status not available 
    at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:331)
    at org.apache.hadoop.mapreduce.Job.getJobState(Job.java:352)
    at org.apache.hadoop.mapred.JobClient$NetworkedJob.getJobState(JobClient.java:300)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:251)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:559)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:424)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255)
    ... 11 more (state=08S01,code=1)

2 Answers

You need to configure the Job History Server so that the job client can read a job's final execution status (verified on Hadoop 2.5.0). Add the following properties (vim mapred-site.xml):

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master.hadoop:10020</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/tmp/hadoop-yarn/staging</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
</property>
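Note that the configuration alone is not enough: the JobHistoryServer daemon must actually be running on the host named in mapreduce.jobhistory.address. A rough sketch of the remaining steps, assuming a standard Hadoop 2.x tarball install under $HADOOP_HOME (that path is an assumption, not something stated in the answer):

```shell
# Restart the history server so it picks up the new mapred-site.xml.
# mr-jobhistory-daemon.sh is the Hadoop 2.x helper script; adjust the
# path if your distribution installs it elsewhere.
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh stop historyserver
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

# Verify the daemon is up; a JobHistoryServer process should be listed.
jps | grep JobHistoryServer
```

Once the daemon is up and listening on port 10020, the Hive job client can fetch the final status of completed MapReduce jobs instead of failing with "Job status not available".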

Also go check the executor logs in YARN; this could be an out-of-memory problem.
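For reference, one way to pull those logs from the command line. The application ID below is a placeholder; substitute the one printed in the Hive console or shown in the ResourceManager UI, and note that `yarn logs` only works when log aggregation (yarn.log-aggregation-enable) is turned on:

```shell
# List recent applications to find the failed job's ID
yarn application -list -appStates FAILED,KILLED,FINISHED

# Dump the aggregated container logs for that application and
# scan them for OOM or other errors
yarn logs -applicationId application_1234567890123_0001 | grep -i -E "OutOfMemory|Error"
```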
