Urgent!!!! After configuring Hive on Spark through CM, execution always fails (10C)

Urgently begging a guru to lend a hand.....

2017-11-10 14:50:13,485 INFO [main]: ql.Driver (SessionState.java:printInfo(986)) - Launching Job 3 out of 5
2017-11-10 14:50:13,485 INFO [main]: ql.Driver (Driver.java:launchTask(1977)) - Starting task [Stage-1:MAPRED] in serial mode
2017-11-10 14:50:13,485 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - In order to change the average load for a reducer (in bytes):
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - set hive.exec.reducers.bytes.per.reducer=<number>
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - In order to limit the maximum number of reducers:
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - set hive.exec.reducers.max=<number>
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - In order to set a constant number of reducers:
2017-11-10 14:50:13,486 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - set mapreduce.job.reduces=<number>
2017-11-10 14:50:13,676 INFO [main]: exec.Task (SessionState.java:printInfo(986)) - Starting Spark Job = d2163859-72ee-4a25-8e27-e566d8ab548c
2017-11-10 14:51:06,982 WARN [RPC-Handler-3]: client.SparkClientImpl (SparkClientImpl.java:rpcClosed(131)) - Client RPC channel closed unexpectedly.
2017-11-10 14:52:06,901 WARN [main]: impl.RemoteSparkJobStatus (RemoteSparkJobStatus.java:getSparkJobInfo(152)) - Failed to get job info.
java.util.concurrent.TimeoutException
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:150)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getState(RemoteSparkJobStatus.java:82)
at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:80)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:109)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2017-11-10 14:52:06,915 ERROR [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(132)) - Failed to monitor Job[ 2] with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.util.concurrent.TimeoutException)'
org.apache.hadoop.hive.ql.metadata.HiveException: java.util.concurrent.TimeoutException
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:153)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getState(RemoteSparkJobStatus.java:82)
at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:80)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:109)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.util.concurrent.TimeoutException
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:150)
... 24 more
2017-11-10 14:52:06,915 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(995)) - Failed to monitor Job[ 2] with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.util.concurrent.TimeoutException)'
org.apache.hadoop.hive.ql.metadata.HiveException: java.util.concurrent.TimeoutException
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:153)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getState(RemoteSparkJobStatus.java:82)
at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:80)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:109)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.util.concurrent.TimeoutException
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49)
at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getSparkJobInfo(RemoteSparkJobStatus.java:150)
... 24 more

2017-11-10 14:52:06,931 ERROR [main]: ql.Driver (SessionState.java:printError(995)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

1 Answer

Make sure Hadoop is running properly, and put the Hadoop configuration files and the Hive configuration file into Spark's configuration directory.
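On a CM-managed cluster that boils down to something like the following sketch (not CM's official procedure; the paths assume the usual /etc/*/conf client-config locations, so adjust to your install):

```
# Make the Hadoop and Hive client configs visible to Spark.
cp /etc/hadoop/conf/core-site.xml \
   /etc/hadoop/conf/hdfs-site.xml \
   /etc/hadoop/conf/yarn-site.xml \
   /etc/hive/conf/hive-site.xml \
   "$SPARK_HOME/conf/"
```

If the RPC channel still closes as in the log above, the TimeoutException in Hive is only the symptom: the remote Spark driver on YARN has died, and `yarn logs -applicationId <appId>` usually shows the real cause.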

Other related questions
Hive on Spark query error.
Help!!! Running BigBench tests with Hive on Spark on Hadoop keeps failing. Log info:

FAILED: SemanticException Failed to get a spark session: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
An error occured while running command:
==========
runEngineCmd -f /var/lib/hadoop-hdfs/Big-Bench/engines/hive/queries/q04/q04.sql
==========

I have dug through a lot of material online; some say it is a version mismatch, others that it is intermittent. Would some kind soul take a look... I'm in tears.
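A "Failed to create spark client" at session setup usually means Hive could not launch the remote Spark driver at all (wrong spark.master, Spark assembly missing from Hive's classpath, YARN not reachable) rather than anything specific to q04.sql. A hedged first checklist, following the Hive-on-Spark getting-started steps for Spark 1.x builds (paths are illustrative):

```
# 1. Hive on Spark prerequisite: make the Spark assembly visible to Hive.
ln -s "$SPARK_HOME"/lib/spark-assembly-*.jar "$HIVE_HOME"/lib/
# 2. Print the engine settings the failing session actually uses
#    (in the Hive CLI, "set <key>;" echoes the current value).
hive -e "set hive.execution.engine; set spark.master;"
```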
Hive on Spark cannot handle Parquet tables
While getting Hive on Spark working I kept stepping into one pit after another. The material online all reads the same: it points at the topic, stops short, and never quite says the thing clearly, which is maddening.

Here is what happened. The Spark website says that to use Hive on Spark you need a Spark distribution that does not bundle Hive, but the official downloads all include Hive, which left only one option: build it myself.

There are several ways to build; the official docs describe three: Spark's bundled make-distribution.sh tool, Maven, and SBT. I started with make-distribution.sh, and the process was infuriating, error after error, though I did eventually push it through by simply re-running the build command every time it failed. The command I used was:

./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.6,parquet-provided"

But the resulting spark-assembly-*.jar was only 106M, and the deployed Spark would not even start. Searching online, I found one person who built with make-distribution.sh and somehow succeeded with no errors at all, and another who hit the same failures I did and then succeeded with Maven. Since make-distribution.sh would not work for me, I switched to Maven as well; that build succeeded, and installing and running it gave no trouble. My Maven command:

mvn -Phadoop-2.6 -Pyarn -Dhadoop.version=2.6.5 -Dyarn.version=2.6.5 -Dscala-2.10 -DskipTests clean package

Just when I thought everything was settled, the next problem arrived: I need to store Hive data in Parquet format, and at that point it fails again with:

Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$MessageTypeBuilder.addFields([Lorg/apache/parquet/schema/Type;)Lorg/apache/parquet/schema/Types$BaseGroupBuilder;

I went looking for answers on Baidu, Google, and Bing and still cannot find where the problem lies. How is this supposed to be solved? Advice from anyone who has been through this would be appreciated.
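A NoSuchMethodError on org.apache.parquet classes almost always means two different Parquet versions ended up on the classpath, which is easy to do when a -Pparquet-provided Spark build sits next to Hive's and Hadoop's own copies. A quick hedged diagnostic (assumes the usual HOME variables are set):

```
# List every Parquet jar that Spark, Hive and Hadoop contribute; seeing two
# versions of parquet-column or parquet-hadoop confirms the conflict.
find "$SPARK_HOME" "$HIVE_HOME" "$HADOOP_HOME" -name 'parquet-*.jar' 2>/dev/null \
  | xargs -n1 basename | sort -u
```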
After putting hive-site.xml into spark/conf/ I get a pile of warnings. How do I handle them, and does it matter if I don't?
When I set things up earlier I never noticed that I had forgotten to put the hive-site.xml configuration file into spark/conf. Today I put it there, and now as soon as I open pyspark it prints a pile of warnings; using Spark SQL produces the same flood. The warnings are below. Since the output is so long, my question up front: I'd like to know how to raise the logging threshold so these are hidden, or how to actually resolve the warnings. Please take a look, thanks!

```shell
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.metastore.client.capability.check does not exist 2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist 2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.druid.broker.address.default does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve-fraction.min does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.ppd.enabled does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.metastore.event.message.factory does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.hs2.user.access does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.druid.storage.storageDirectory does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.dynamic.semijoin.reduction.threshold does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.xmx.headroom does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.dynamic.semijoin.reduction does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.direct does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.stats does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.client.consistent.splits does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.start does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.ttl does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.management.acl does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.delegation.token.lifetime does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.guidKey does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.ats.hook.queue.capacity does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.strict.checks.large.query does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.bigtable.minsize.semijoin.reduction does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.min does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.user does not exist 2019-11-14 17:13:50,995 WARN
conf.HiveConf: HiveConf of name hive.llap.io.encode.alloc.size does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.wait.queue.comparator.class.name does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.port does not exist 2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.orc.cache.use.soft.references does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.enabled does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve.fraction.max does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.listener.thread-count does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.tez.container.max.java.heap.fraction does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.stats.column.autogather does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am.liveness.heartbeat.interval.ms does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.decoding.metrics.percentiles.intervals does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.groupby.position.alias does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.metastore.txn.store.impl does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.spark.use.groupby.shuffle does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.object.cache.enabled does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.parallel.ops.in.session does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.groupby.limit.extrastep does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.location does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.delay.seconds does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.materializedview.fileformat does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.file.cleaner.threads does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.test.fail.compaction does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.blobstore.use.blobstore.as.scratchdir does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.service.metrics.class does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.mmap.path does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.download.permanent.fns does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.historic.queries does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.vectorized.execution.reducesink.new.enabled does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.compactor.max.num.delta does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.attempted does not exist 2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name 
hive.server2.webui.port does not exist 2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.compactor.initiator.failed.compacts.threshold does not exist 2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.service.metrics.reporter does not exist 2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.max.pending.writes does not exist 2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.execution.mode does not exist 2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.enable.grace.join.in.llap does not exist 2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.mode does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.io.threadpool.size does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.druid.select.threshold does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.scratchdir.lock does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.spnego does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.frequency does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.hs2.coordinator.enabled does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.timeout.seconds does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.filter.stats.reduction does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.exec.orc.base.delta.ratio does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.fastpath does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.clear.dangling.scratchdir does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.test.fail.heartbeater does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.file.cleanup.delay.seconds does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.management.rpc.port does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.mapjoin.hybridgrace.bloomfilter does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.tree does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.stats.ndv.tuner does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.query.length does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.failed does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.close.session.on.disconnect does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.ppd.windowing does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.initial.metadata.count.enabled does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.enabled does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup.min does not exist 2019-11-14 
17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.file.metadata.threads does not exist 2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.service.refresh.interval.sec does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.output.size does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.remote.token.requires.signing does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.cache.allow.synthetic.fileid does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.hash.table.inflation.factor does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.hbase.ttl does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.vectorized does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.writeset.reaper.interval does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.vector.serde.deserialize does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.order.columnalignment does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.send.buffer.size does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.exec.schema.evolution does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.elements.values.clause does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.server2.llap.concurrent.queries does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.allow.uber does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.partition.size.max does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.auth does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.orc.splits.include.fileid does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.communicator.num.threads does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.orderby.position.alias does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.sleep.between.retries.ms does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.partitions does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.service.metrics.hadoop2.component does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.shuffle.port does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.elements.in.clause does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.druid.passiveWaitTimeMs does not exist 2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.load.dynamic.partitions.thread does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.segments.granularity does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name 
hive.server2.thrift.http.response.header.size does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.conf.internal.variable.list does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductionpercentage does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.repl.cm.enabled does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.limit does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.serialize.in.tasks does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.query.timeout.seconds does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.service.metrics.hadoop2.frequency does not exist 2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.orc.splits.directory.batch.ms does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.reader.wait does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.max.timeout.ms does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.max.open.txns does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.reduce.side does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.server2.zookeeper.publish.configs does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.auto.convert.join.hashtable.max.entries does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.server2.tez.sessions.init.threads does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.metastore.authorization.storage.check.externaltable.drop does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.execution.mode does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.cbo.cnf.maxnodes does not exist 2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.vectorized.adaptor.usage.mode does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.materializedview.rewriting does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.groupMembershipKey does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.catalog.cache.size does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.cbo.show.warnings does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.fshandler.threads does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.max.bloom.filter.entries does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.metadata.fraction does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.materializedview.serde does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.wait.queue.size does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.cache.entries does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.txn.operational.properties 
does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.memory.ttl does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.port does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.nonvector.wrapper.enabled does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.cache.size does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.vectorized.input.format does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.optimize.cte.materialize.threshold does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.clean.until does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.optimize.semijoin.conversion does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.port does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.metrics.enabled does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.repl.rootdir does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.limit.partition.request does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.async.log.enabled does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.logger does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.allow.udf.load.on.demand does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.cli.tez.session.async does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.bloom.filter.factor does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am-reporter.max.threads does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.use.file.size.for.mapjoin does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning.compat does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.webui.spnego.principal does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.preemption.metrics.intervals does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.shuffle.dir.watcher.enabled does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.arena.count does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.use.SSL does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.timeout.ms does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.transpose.aggr.join does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.maxTries does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning.max.data.size does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.metadata.base does not exist 2019-11-14 
17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.invalidator.frequency does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.lrfu does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.mmap does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.coordinator.address.default does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.max.fetch.size does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.conf.hidden.list does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.io.sarg.cache.max.weight.mb does not exist 2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.clear.dangling.scratchdir.interval does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.druid.sleep.time does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.row.serde.deserialize does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.compile.lock.timeout does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.interval does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.variance does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.lrfu.lambda does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.druid.metadata.db.type does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.stream.timeout does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.transactional.events.mem does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.default.fetch.size does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.repl.cm.retain does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.merge.cardinality.check does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.groupClassKey does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.allow.permanent.fns does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.ssl does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.txn.manager.dump.lock.state.on.acquire.timeout does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.succeeded does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.fileid.path does not exist 2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.slice.row.count does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.hashtable.probe.percent does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.am.use.fqdn does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.min.timeout.ms does not exist 
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.validate.acls does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.support.special.characters.tablename does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.mv.files.thread does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.skip.compile.udf.check does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.vector.serde.enabled does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.repl.cm.interval does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.server2.sleep.interval.between.start.attempts does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.container.mb does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.druid.http.read.timeout does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.blobstore.optimizations.enabled does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.orc.gap.cache does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.exec.copyfile.maxnumfiles does not exist 2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.formats does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.druid.http.numConnection does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.enable.preemption does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.executors does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.full does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.connection.class does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.tez.sessions.custom.queue.allowed does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.slice.lrr does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.password does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.writer.wait does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.request.header.size does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.threads does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductiontuples does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.test.rollbacktxn does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.num.schedulable.tasks.per.node does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.acl does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.size does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.strict.checks.type.safety does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.async.exec.async.compile does not exist 
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.input.size does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.tez.enable.memory.manager does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.msck.repair.batch.size does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.blobstore.supported.schemes does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.orc.splits.allow.synthetic.fileid does not exist 2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.stats.filter.in.factor does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.spark.use.op.stats does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.exec.input.listing.max.threads does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime.jitter does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.port does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.strict.checks.cartesian.product does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.num.handlers does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.vcpus.per.instance does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.count.open.txns.interval does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.min.bloom.filter.entries does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.optimize.partition.columns.separate does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.orc.cache.stripe.details.mem.size does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.txn.heartbeat.threadpool.size does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.locality.delay does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.repl.cmrootdir does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.spark.exec.inplace.progress does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.druid.working.directory does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve.fraction does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.merge.nway.joins does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.txn.strict.locking.mode does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.vector.serde.async.enabled does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not 
exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.in.place.progress does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.memory.rownum.max does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.xsrf.filter.enabled does not exist 2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Python version 3.7.4 (default, Sep 20 2019 17:49:03)
SparkSession available as 'spark'.
```
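These warnings are cosmetic: the hive-site.xml carries property names from a newer Hive than the Hive client bundled with Spark 2.4, so HiveConf simply reports keys it does not recognize. To raise the threshold so they disappear, one option (a sketch; assumes the default Spark log4j 1.x setup) is to pin the noisy logger in Spark's log4j config:

```
# Create conf/log4j.properties from the template once, then silence HiveConf.
cp "$SPARK_HOME/conf/log4j.properties.template" "$SPARK_HOME/conf/log4j.properties"
echo 'log4j.logger.org.apache.hadoop.hive.conf.HiveConf=ERROR' \
  >> "$SPARK_HOME/conf/log4j.properties"
```

(sc.setLogLevel only takes effect after the SparkContext exists, so it typically cannot suppress these startup warnings.)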
Spark 1.3 compiled against Scala 2.11: hive-thrift build error, related to jline
[INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building Spark Project Hive Thrift Server 1.3.0 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ spark-hive-thriftserver_2.11 --- [INFO] Deleting /usr/local/spark-1.3.0/sql/hive-thriftserver/target [INFO] [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-versions) @ spark-hive-thriftserver_2.11 --- [INFO] [INFO] --- scala-maven-plugin:3.2.0:add-source (eclipse-add-source) @ spark-hive-thriftserver_2.11 --- [INFO] Add Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala [INFO] Add Test Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/test/scala [INFO] [INFO] --- build-helper-maven-plugin:1.8:add-source (add-scala-sources) @ spark-hive-thriftserver_2.11 --- [INFO] Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala added. [INFO] [INFO] --- build-helper-maven-plugin:1.8:add-source (add-default-sources) @ spark-hive-thriftserver_2.11 --- [INFO] Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/v0.13.1/src/main/scala added. [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ spark-hive-thriftserver_2.11 --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ spark-hive-thriftserver_2.11 --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/resources [INFO] Copying 3 resources [INFO] [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ spark-hive-thriftserver_2.11 --- [WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile [INFO] Using incremental compilation [INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.11.2,2.0.1,null) [INFO] Compiling 9 Scala sources to /usr/local/spark-1.3.0/sql/hive-thriftserver/target/scala-2.11/classes... [ERROR] /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala:25: object ConsoleReader is not a member of package jline [ERROR] import jline.{ConsoleReader, History} [ERROR] ^ [WARNING] Class jline.Completor not found - continuing with a stub. [WARNING] Class jline.ConsoleReader not found - continuing with a stub. [ERROR] /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala:165: not found: type ConsoleReader [ERROR] val reader = new ConsoleReader() [ERROR] ^ [ERROR] Class jline.Completor not found - continuing with a stub. [WARNING] Class com.google.protobuf.Parser not found - continuing with a stub. [WARNING] Class com.google.protobuf.Parser not found - continuing with a stub. [WARNING] Class com.google.protobuf.Parser not found - continuing with a stub. [WARNING] Class com.google.protobuf.Parser not found - continuing with a stub. [WARNING] 6 warnings found [ERROR] three errors found [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Spark Project Parent POM ........................... SUCCESS [01:20 min] [INFO] Spark Project Networking ........................... SUCCESS [01:31 min] [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 47.808 s] [INFO] Spark Project Core ................................. 
SUCCESS [34:00 min] [INFO] Spark Project Bagel ................................ SUCCESS [03:21 min] [INFO] Spark Project GraphX ............................... SUCCESS [09:22 min] [INFO] Spark Project Streaming ............................ SUCCESS [15:07 min] [INFO] Spark Project Catalyst ............................. SUCCESS [14:35 min] [INFO] Spark Project SQL .................................. SUCCESS [16:31 min] [INFO] Spark Project ML Library ........................... SUCCESS [18:15 min] [INFO] Spark Project Tools ................................ SUCCESS [01:50 min] [INFO] Spark Project Hive ................................. SUCCESS [13:58 min] [INFO] Spark Project REPL ................................. SUCCESS [06:13 min] [INFO] Spark Project YARN ................................. SUCCESS [07:05 min] [INFO] Spark Project Hive Thrift Server ................... FAILURE [01:39 min] [INFO] Spark Project Assembly ............................. SKIPPED [INFO] Spark Project External Twitter ..................... SKIPPED [INFO] Spark Project External Flume Sink .................. SKIPPED [INFO] Spark Project External Flume ....................... SKIPPED [INFO] Spark Project External MQTT ........................ SKIPPED [INFO] Spark Project External ZeroMQ ...................... SKIPPED [INFO] Spark Project Examples ............................. SKIPPED [INFO] Spark Project YARN Shuffle Service ................. SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 02:25 h [INFO] Finished at: 2015-04-16T14:11:24+08:00 [INFO] Final Memory: 62M/362M [INFO] ------------------------------------------------------------------------ [WARNING] The requested profile "hadoop-2.5" could not be activated because it does not exist. [ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (scala-compile-first) on project spark-hive-thriftserver_2.11: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile failed. CompileFailed -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn <goals> -rf :spark-hive-thriftserver_2.11
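For context, the Spark 1.3 building docs note that the JDBC component (which includes the Hive Thrift server) was not yet supported for Scala 2.11, which is consistent with the jline.ConsoleReader failure above. If the Thrift server is not strictly needed, a hedged sketch of the documented 2.11 build (swap the Hadoop profile for your version; note the log above shows "hadoop-2.5" is not a real profile):

```
# Switch all POMs to Scala 2.11, then build with the 2.11 flag.
./dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
```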
Hive UDF fails with SemanticException [Error 10014]
**How do I pull in dependencies for a Hive UDF? I added the third-party dependencies as follows:**
```
add file /home/chenxy/hive/GeoLite2-City.mmdb;
add jar /home/chenxy/jars/spark-hive_2.11-2.1.0.jar;
add jar /home/chenxy/jars/geoip2-2.12.0.jar;
add jar /home/chenxy/jars/IpCity.jar;
create temporary function ip2poi as 'com.tianzhuo.portrait.Ip2PoiUDF';
select ip2poi("183.128.104.19");
```
After running select ip2poi("183.128.104.19"); the error log is:
```
FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '"183.128.104.19"': org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public java.lang.String com.tianzhuo.portrait.Ip2PoiUDF.evaluate(java.lang.String) on object com.tianzhuo.portrait.Ip2PoiUDF@23592946 of class com.tianzhuo.portrait.Ip2PoiUDF with arguments {183.128.104.19:java.lang.String} of size 1
```
If evaluate simply returns a string everything works; it only fails once the geoip2-2.12.0.jar-related code is involved. Help me out, experts: this has stumped me for two days, and I've searched all of Baidu and Google without finding a fitting solution.
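An "Unable to execute method" wrapper like this often hides a class-loading failure inside evaluate(): geoip2 pulls in transitive dependencies (maxmind-db and jackson) that also have to be on the classpath, not just geoip2-2.12.0.jar itself. A hedged check: if IpCity.jar was meant to be a self-contained (shaded) jar, those classes must actually be inside it; otherwise each transitive jar needs its own ADD JAR. Also note that inside the CLI session where the ADD JAR statements ran, "list jars;" shows what is actually registered.

```
# Quick check whether geoip2's transitive dependencies were shaded into the
# UDF jar; no output here suggests they are missing from the classpath.
unzip -l /home/chenxy/jars/IpCity.jar | grep -i -e jackson -e maxmind
```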
Urgent, expert needed!! Hive reports "Must specify table" when inserting into an HBase-integrated partitioned table
Hadoop version: hadoop-2.7.1; HBase version: hbase-1.1.2; Hive version: apache-hive-2.0.0-bin. I'm a beginner. After integrating Hive and HBase, I created a partitioned Hive table associated with an HBase table, and inserting data into it fails. The insert statement and the error are below; I'd appreciate an answer from someone who understands this. visited_in_hive is the HBase-backed table I'm inserting into; hv_c_hb is another Hive table (not partitioned) that already contains data and has the same columns as visited_in_hive.

Insert statement: insert into table visited_in_hive partition (datetime='2016-03-05') select * from hv_c_hb;

Error:

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = test2_20160308092429_26a93c7c-b945-4329-98ff-47a4067b579d
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1106)
at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:268)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:432)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:138)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:158)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:101)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1840)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1584)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1361)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1184)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1172)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:778)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:717)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.IllegalArgumentException: Must specify table name
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1128)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1103)
... 37 more
Caused by: java.lang.IllegalArgumentException: Must specify table name
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:188)
at org.apache.hive.common.util.ReflectionUtil.setConf(ReflectionUtil.java:101)
at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:87)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:300)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:290)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1126)
... 38 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

The error says no table name was specified, but the statement clearly names the table. Could it be a version problem? I'd be grateful for an expert answer.
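Two hedged observations rather than a definitive fix: first, the Hive/HBase integration documentation describes a table property, hbase.mapred.output.outputtable, that the HBase TableOutputFormat reads to learn the target table at write time, and its absence is a commonly cited cause of exactly this "Must specify table name" failure; second, the HBase storage handler has no real support for partitioned Hive tables, so the partition clause itself may be the deeper problem. A sketch of the property workaround:

```
# Hedged workaround: point the HBase output format at the mapped table.
# 'your_hbase_table' is a placeholder -- use the name from hbase.table.name
# in your table's DDL.
hive -e "
ALTER TABLE visited_in_hive
SET TBLPROPERTIES ('hbase.mapred.output.outputtable' = 'your_hbase_table');
"
```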
hive select count(*) fails, please take a look
```
select count(*) from db_hive.student;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
Query ID = root_20171012025616_0d673cbd-6c2b-421b-bffc-98e2c4d8d676
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1507801793754_0002, Tracking URL = http://192.168.163.130:8088/proxy/application_1507801793754_0002/
Kill Command = /usr/hadoop/bin/hadoop job -kill job_1507801793754_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-10-12 02:56:29,615 Stage-1 map = 0%, reduce = 0%
2017-10-12 02:56:36,331 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.15 sec
2017-10-12 02:56:42,700 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.64 sec
MapReduce Total cumulative CPU time: 2 seconds 640 msec
java.io.IOException: java.lang.OutOfMemoryError: PermGen space
    at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:338)
    at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:390)
    at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:583)
    at org.apache.hadoop.mapreduce.Job$5.run(Job.java:680)
    at org.apache.hadoop.mapreduce.Job$5.run(Job.java:677)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:677)
    at org.apache.hadoop.mapred.JobClient$NetworkedJob.getTaskCompletionEvents(JobClient.java:349)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.computeReducerTimeStatsPerJob(HadoopJobExecHelper.java:612)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:570)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:424)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2182)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1838)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1525)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1236)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1226)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.OutOfMemoryError: PermGen space
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2625)
    at java.lang.Class.getMethod0(Class.java:2866)
    at java.lang.Class.getMethod(Class.java:1676)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.getReturnProtoType(ProtobufRpcEngine.java:293)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:258)
    at com.sun.proxy.$Proxy72.getTaskAttemptCompletionEvents(Unknown Source)
    at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBClientImpl.java:177)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:324)
    at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:390)
    at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:583)
    at org.apache.hadoop.mapreduce.Job$5.run(Job.java:680)
    at org.apache.hadoop.mapreduce.Job$5.run(Job.java:677)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
Ended Job = job_1507801793754_0002 with exception 'java.io.IOException(java.lang.OutOfMemoryError: PermGen space)'
FAILED: Hive Internal Error: java.lang.OutOfMemoryError(PermGen space)
java.lang.OutOfMemoryError: PermGen space
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.64 sec   HDFS Read: 7728 HDFS Write: 101 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 640 msec
AsyncLogger error handling event seq=105, value='[ERROR calling class org.apache.logging.log4j.core.async.RingBufferLogEvent.toString(): java.lang.NullPointerException]': java.lang.OutOfMemoryError: PermGen space
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
```
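Note that the MapReduce job itself finished (the summary even reports SUCCESS): the OutOfMemoryError hits the Hive CLI's own JVM afterwards, while it fetches task-completion events. On Java 7 one plausible workaround, sketched below with an arbitrary size, is to give that client JVM a larger PermGen before starting hive:

```
# Sketch only: 256m is a guess to tune. HADOOP_CLIENT_OPTS is read by the
# hadoop launcher script that spawns the Hive CLI JVM.
export HADOOP_CLIENT_OPTS="-XX:MaxPermSize=256m ${HADOOP_CLIENT_OPTS}"
hive
```

Java 8 removed PermGen entirely, so upgrading the JDK also makes this class of error disappear.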
Hadoop program fails on Windows 10
Running some code that needs Hadoop in Eclipse on Windows 10, I first hit: `Could not locate executable null\bin\winutils.exe in the Hadoop binaries`. Following a guide, I downloaded winutils.exe and set the environment variable. Running the program again then failed with:

```
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
    at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:162)
    at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:160)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:167)
    at org.apache.spark.sql.CarbonContext.<init>(CarbonContext.scala:34)
    at org.carbondata.examples.util.InitForExamples$.createCarbonContext(InitForExamples.scala:42)
    at org.carbondata.examples.CarbonExample$.main(CarbonExample.scala:26)
    at org.carbondata.examples.CarbonExample.main(CarbonExample.scala)
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:612)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    ... 8 more
```

As instructed, I ran `winutils.exe chmod 777 /tmp/hive` in a Windows 10 cmd window, which reported:

```
ChangeFileModeByMask error (3): ???????????
```

Can anyone help?
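Windows error code 3 is ERROR_PATH_NOT_FOUND, and winutils resolves `/tmp/hive` against the current drive, so a plausible reading is that `\tmp\hive` simply does not exist yet on the drive the command runs against. A sketch, assuming the program runs from drive D: (adjust the drive letter):

```
:: cmd.exe sketch; D: is an assumption, use the drive your job actually runs from.
D:
mkdir D:\tmp\hive
winutils.exe chmod -R 777 D:\tmp\hive
:: verify the permissions took effect
winutils.exe ls D:\tmp\hive
```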
Hive is hopeless: processing a Hive external table mapped from HBase keeps falling over because of the sheer volume of HBase data.
Three-node Hadoop cluster (CentOS 7.5, 4 cores, 16 GB RAM per node). The HBase table holds roughly 70 million rows. A Hive external table is mapped onto it:

```
create external table worked_data_o(
  key String, province String, city String, code String, acc_number String,
  tel String, wd_date String, rep_disorder String, overtime String,
  receipt String, descr String, exp_empid String, fault_1 String,
  fault_2 String, fault_type String, acs_way String, address String)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES("hbase.columns.mapping"=":key,data:province,data:city,data:code,data:account,data:tel,data:date,data:rep_disorder,data:overtime,data:receipt,data:emp_descr,data:emp_id,data:fault_name,data:fault_descr,data:fault_type,data:acs_way,data:address")
TBLPROPERTIES("hbase.table.name"="worked_data");
```

The external table is then processed into a new table:

```
create table customer_intention as (
  select wd.acc_number as acc_number,
         if(wd.descr='客户不满' OR wd.descr='客户不配合', 2,
           IF(wd.descr='客户不听解释' OR wd.descr='客户情绪激动', 4,
             IF(wd.descr='客户有投诉意向', 6,
               IF(wd.descr='客户有强烈投诉意向', 8, 1)))) AS mood
  From worked_data_o as wd);
```

It fails with:

```
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = zkpk_20190506095541_4af46dd8-bf98-4a0e-b0b4-d7ed2974ab7b
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1556796305355_0016, Tracking URL = http://master:8088/proxy/application_1556796305355_0016/
Kill Command = /home/zkpk/hadoop-2.7.2/bin/hadoop job -kill job_1556796305355_0016
Hadoop job information for Stage-1: number of mappers: 8; number of reducers: 1
2019-05-06 09:55:54,098 Stage-1 map = 0%, reduce = 0%
2019-05-06 09:56:54,965 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 149.84 sec
2019-05-06 09:57:55,903 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 212.38 sec
2019-05-06 09:58:56,456 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 09:59:57,079 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:00:57,760 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:01:58,363 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:02:58,965 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:03:59,598 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:05:00,223 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:06:00,843 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:07:01,478 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:08:02,204 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 266.3 sec
2019-05-06 10:09:02,871 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:10:02,902 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:11:03,408 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:12:04,032 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:13:04,624 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:14:05,247 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:15:05,844 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:16:06,468 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:17:07,159 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:18:07,762 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:19:08,330 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:20:08,556 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:21:09,202 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:22:09,765 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:23:10,416 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:24:10,989 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:25:11,574 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:26:12,113 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:27:12,720 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:28:13,333 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:29:13,919 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:30:14,194 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:31:14,707 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:32:15,303 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:33:15,933 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:34:16,503 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:35:17,051 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:36:17,718 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:37:18,231 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:38:18,866 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:39:19,420 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:40:19,987 Stage-1 map = 0%, reduce = 0%
2019-05-06 10:40:23,058 Stage-1 map = 100%, reduce = 100%
MapReduce Total cumulative CPU time: 4 minutes 26 seconds 300 msec
Ended Job = job_1556796305355_0016 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1556796305355_0016_m_000006 (and more) from job job_1556796305355_0016
Examining task ID: task_1556796305355_0016_m_000002 (and more) from job job_1556796305355_0016
Examining task ID: task_1556796305355_0016_m_000000 (and more) from job job_1556796305355_0016
Task with the most failures(4):
-----
Task ID: task_1556796305355_0016_m_000004
URL: http://master:8088/taskdetails.jsp?jobid=job_1556796305355_0016&tipid=task_1556796305355_0016_m_000004
-----
Diagnostic Messages for this Task:
AttemptID:attempt_1556796305355_0016_m_000004_3 Timed out after 600 secs
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 8  Reduce: 1   Cumulative CPU: 266.3 sec   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 4 minutes 26 seconds 300 msec
```

Any ideas on how to fix this?
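The diagnostic line is the key one: the mappers are killed for reporting no progress for 600 seconds (the `mapreduce.task.timeout` default), which is typical when each mapper sits inside a long HBase region scan. Two session settings worth experimenting with before rerunning the CREATE TABLE ... AS SELECT; both values are illustrative starting points, not recommendations:

```
-- Give slow HBase scans more time before YARN declares the attempt dead (ms).
SET mapreduce.task.timeout=1800000;
-- Return more rows per scanner RPC so each heartbeat covers more work.
SET hbase.client.scanner.caching=1000;
```

If the scans still crawl on this hardware, the longer-term options are splitting the HBase table across more regions, or bulk-exporting it to HDFS first and running the transformation over plain files instead of scanning HBase live.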
Oozie installation: oozie-docs build error
```
[INFO] Unable to load parent project from a relative path: 1 problem was encountered while building the effective model
[FATAL] Non-readable POM /home/oozie/oozie-4.2.0/../pom.xml: /home/oozie/oozie-4.2.0/../pom.xml (No such file or directory) @ for project at /home/oozie/oozie-4.2.0/../pom.xml for project at /home/oozie/oozie-4.2.0/../pom.xml
[INFO] Parent project loaded from repository.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Oozie Main ................................. SUCCESS [  1.418 s]
[INFO] Apache Oozie Hadoop Utils ......................... SUCCESS [  0.931 s]
[INFO] Apache Oozie Hadoop Distcp hadoop-1-4.2.0 ......... SUCCESS [  0.114 s]
[INFO] Apache Oozie Hadoop Auth hadoop-1-4.2.0 ........... SUCCESS [  0.151 s]
[INFO] Apache Oozie Hadoop Libs .......................... SUCCESS [  0.022 s]
[INFO] Apache Oozie Client ............................... SUCCESS [ 15.918 s]
[INFO] Apache Oozie Share Lib Oozie ...................... SUCCESS [  1.950 s]
[INFO] Apache Oozie Share Lib HCatalog ................... SUCCESS [  7.325 s]
[INFO] Apache Oozie Share Lib Distcp ..................... SUCCESS [  0.618 s]
[INFO] Apache Oozie Core ................................. SUCCESS [ 36.660 s]
[INFO] Apache Oozie Share Lib Streaming .................. SUCCESS [  3.166 s]
[INFO] Apache Oozie Share Lib Pig ........................ SUCCESS [  4.054 s]
[INFO] Apache Oozie Share Lib Hive ....................... SUCCESS [  8.105 s]
[INFO] Apache Oozie Share Lib Hive 2 ..................... SUCCESS [  3.258 s]
[INFO] Apache Oozie Share Lib Sqoop ...................... SUCCESS [  1.792 s]
[INFO] Apache Oozie Examples ............................. SUCCESS [  3.329 s]
[INFO] Apache Oozie Share Lib Spark ...................... SUCCESS [  3.642 s]
[INFO] Apache Oozie Share Lib ............................ SUCCESS [ 12.581 s]
[INFO] Apache Oozie Docs ................................. FAILURE [ 16.097 s]
[INFO] Apache Oozie WebApp ............................... SKIPPED
[INFO] Apache Oozie Tools ................................ SKIPPED
[INFO] Apache Oozie MiniOozie ............................ SKIPPED
[INFO] Apache Oozie Distro ............................... SKIPPED
[INFO] Apache Oozie ZooKeeper Security Tests ............. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:02 min
[INFO] Finished at: 2015-11-30T17:12:19+08:00
[INFO] Final Memory: 192M/2008M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:2.0-beta-6:site (default) on project oozie-docs: Error parsing site descriptor: Unrecognised tag: 'html' (position: START_TAG seen <html>... @1:6) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :oozie-docs
ERROR, Oozie distro creation failed
```
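`Unrecognised tag: 'html'` from maven-site-plugin 2.0-beta-6 usually means the plugin downloaded an HTML page (a repository or proxy error page) where it expected a site descriptor, so only the docs module is broken. If the docs are dispensable, one workaround, offered as a hack under the assumption that the top-level pom declares the module literally as `<module>docs</module>`, is to drop it from the reactor and rebuild:

```
# Hack, not a root-cause fix: comment the docs module out of the source-root
# pom.xml (path taken from the log above), then rerun the distro build.
cd /home/oozie/oozie-4.2.0
sed -i 's|<module>docs</module>|<!-- <module>docs</module> -->|' pom.xml
bin/mkdistro.sh -DskipTests
```

Alternatively, fix the Maven repository or mirror settings so the site-descriptor download succeeds, then resume with the command the log itself suggests: `mvn <goals> -rf :oozie-docs`.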