After I put hive-site.xml into spark/conf/, I get a flood of warnings. How should I deal with them, and does it matter if I leave them alone?

When I set this up originally I never noticed that I had forgotten to copy the hive-site.xml config file into spark/conf. Today I put the file in, and now a pile of warnings is printed the moment I open pyspark, and the same warnings appear whenever I use Spark SQL. The full output is pasted below.

Because it is so long, I will state my question up front: how can I raise the logging threshold so these messages are hidden, or how can I actually resolve the warnings? I have also sketched what I am planning to try right after the pasted output. Any help would be appreciated, thanks!

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.metastore.client.capability.check does not exist
2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.druid.broker.address.default does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve-fraction.min does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.ppd.enabled does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.metastore.event.message.factory does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.hs2.user.access does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.druid.storage.storageDirectory does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.dynamic.semijoin.reduction.threshold does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.xmx.headroom does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.dynamic.semijoin.reduction does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.direct does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.stats does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.client.consistent.splits does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.start does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.ttl does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.management.acl does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.delegation.token.lifetime does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.guidKey does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.ats.hook.queue.capacity does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.strict.checks.large.query does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.bigtable.minsize.semijoin.reduction does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.min does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.user does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.alloc.size does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.wait.queue.comparator.class.name does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.port does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.orc.cache.use.soft.references does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.enabled does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve.fraction.max does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.listener.thread-count does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.tez.container.max.java.heap.fraction does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.stats.column.autogather does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am.liveness.heartbeat.interval.ms does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.decoding.metrics.percentiles.intervals does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.groupby.position.alias does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.metastore.txn.store.impl does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.spark.use.groupby.shuffle does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.object.cache.enabled does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.parallel.ops.in.session does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.groupby.limit.extrastep does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.location does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.delay.seconds does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.materializedview.fileformat does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.file.cleaner.threads does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.test.fail.compaction does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.blobstore.use.blobstore.as.scratchdir does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.service.metrics.class does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.mmap.path does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.download.permanent.fns does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.historic.queries does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.vectorized.execution.reducesink.new.enabled does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.compactor.max.num.delta does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.attempted does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.port does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.compactor.initiator.failed.compacts.threshold does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.service.metrics.reporter does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.max.pending.writes does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.execution.mode does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.enable.grace.join.in.llap does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.mode does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.io.threadpool.size does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.druid.select.threshold does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.scratchdir.lock does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.spnego does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.frequency does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.hs2.coordinator.enabled does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.timeout.seconds does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.filter.stats.reduction does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.exec.orc.base.delta.ratio does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.fastpath does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.clear.dangling.scratchdir does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.test.fail.heartbeater does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.file.cleanup.delay.seconds does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.management.rpc.port does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.mapjoin.hybridgrace.bloomfilter does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.tree does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.stats.ndv.tuner does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.query.length does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.failed does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.close.session.on.disconnect does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.ppd.windowing does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.initial.metadata.count.enabled does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.enabled does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup.min does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.file.metadata.threads does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.service.refresh.interval.sec does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.output.size does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.remote.token.requires.signing does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.cache.allow.synthetic.fileid does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.hash.table.inflation.factor does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.hbase.ttl does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.vectorized does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.writeset.reaper.interval does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.vector.serde.deserialize does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.order.columnalignment does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.send.buffer.size does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.exec.schema.evolution does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.elements.values.clause does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.server2.llap.concurrent.queries does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.allow.uber does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.partition.size.max does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.auth does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.orc.splits.include.fileid does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.communicator.num.threads does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.orderby.position.alias does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.sleep.between.retries.ms does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.partitions does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.service.metrics.hadoop2.component does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.shuffle.port does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.elements.in.clause does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.druid.passiveWaitTimeMs does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.load.dynamic.partitions.thread does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.segments.granularity does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.response.header.size does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.conf.internal.variable.list does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductionpercentage does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.repl.cm.enabled does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.limit does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.serialize.in.tasks does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.query.timeout.seconds does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.service.metrics.hadoop2.frequency does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.orc.splits.directory.batch.ms does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.reader.wait does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.max.timeout.ms does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.max.open.txns does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.reduce.side does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.server2.zookeeper.publish.configs does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.auto.convert.join.hashtable.max.entries does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.server2.tez.sessions.init.threads does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.metastore.authorization.storage.check.externaltable.drop does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.execution.mode does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.cbo.cnf.maxnodes does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.vectorized.adaptor.usage.mode does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.materializedview.rewriting does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.groupMembershipKey does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.catalog.cache.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.cbo.show.warnings does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.fshandler.threads does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.max.bloom.filter.entries does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.metadata.fraction does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.materializedview.serde does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.wait.queue.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.cache.entries does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.txn.operational.properties does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.memory.ttl does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.port does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.nonvector.wrapper.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.cache.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.vectorized.input.format does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.optimize.cte.materialize.threshold does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.clean.until does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.optimize.semijoin.conversion does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.port does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.metrics.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.repl.rootdir does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.limit.partition.request does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.async.log.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.logger does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.allow.udf.load.on.demand does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.cli.tez.session.async does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.bloom.filter.factor does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am-reporter.max.threads does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.use.file.size.for.mapjoin does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning.compat does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.webui.spnego.principal does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.preemption.metrics.intervals does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.shuffle.dir.watcher.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.arena.count does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.use.SSL does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.timeout.ms does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.transpose.aggr.join does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.maxTries does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning.max.data.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.metadata.base does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.invalidator.frequency does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.lrfu does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.mmap does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.coordinator.address.default does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.max.fetch.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.conf.hidden.list does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.io.sarg.cache.max.weight.mb does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.clear.dangling.scratchdir.interval does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.druid.sleep.time does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.row.serde.deserialize does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.compile.lock.timeout does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.interval does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.variance does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.lrfu.lambda does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.druid.metadata.db.type does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.stream.timeout does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.transactional.events.mem does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.default.fetch.size does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.repl.cm.retain does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.merge.cardinality.check does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.groupClassKey does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.allow.permanent.fns does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.ssl does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.txn.manager.dump.lock.state.on.acquire.timeout does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.succeeded does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.fileid.path does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.slice.row.count does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.hashtable.probe.percent does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.am.use.fqdn does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.min.timeout.ms does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.validate.acls does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.support.special.characters.tablename does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.mv.files.thread does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.skip.compile.udf.check does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.vector.serde.enabled does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.repl.cm.interval does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.server2.sleep.interval.between.start.attempts does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.container.mb does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.druid.http.read.timeout does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.blobstore.optimizations.enabled does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.orc.gap.cache does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.exec.copyfile.maxnumfiles does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.formats does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.druid.http.numConnection does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.enable.preemption does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.executors does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.full does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.connection.class does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.tez.sessions.custom.queue.allowed does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.slice.lrr does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.password does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.writer.wait does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.request.header.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.threads does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductiontuples does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.test.rollbacktxn does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.num.schedulable.tasks.per.node does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.acl does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.strict.checks.type.safety does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.async.exec.async.compile does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.input.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.tez.enable.memory.manager does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.msck.repair.batch.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.blobstore.supported.schemes does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.orc.splits.allow.synthetic.fileid does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.stats.filter.in.factor does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.spark.use.op.stats does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.exec.input.listing.max.threads does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime.jitter does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.port does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.strict.checks.cartesian.product does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.num.handlers does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.vcpus.per.instance does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.count.open.txns.interval does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.min.bloom.filter.entries does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.optimize.partition.columns.separate does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.orc.cache.stripe.details.mem.size does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.txn.heartbeat.threadpool.size does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.locality.delay does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.repl.cmrootdir does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.spark.exec.inplace.progress does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.druid.working.directory does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve.fraction does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.merge.nway.joins does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.txn.strict.locking.mode does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.vector.serde.async.enabled does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.in.place.progress does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.memory.rownum.max does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.xsrf.filter.enabled does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Python version 3.7.4 (default, Sep 20 2019 17:49:03)
SparkSession available as 'spark'.
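
For the first part of my question (hiding the messages), here is what I am planning to try. It is only a minimal sketch based on the hint in the startup output ("To adjust logging level use sc.setLogLevel(newLevel)") and on the standard Spark 2.4 layout, which ships a conf/log4j.properties.template; I have not confirmed that it suppresses these particular HiveConf warnings, since they seem to be printed while hive-site.xml is parsed, before the shell prompt even appears.

    from pyspark.sql import SparkSession

    # In the pyspark shell, `spark` and `sc` already exist; in a standalone
    # script they would be created first:
    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Raise the threshold for the current session. This only takes effect once
    # the SparkContext exists, so it cannot hide output printed during startup:
    spark.sparkContext.setLogLevel("ERROR")

    # To make the change permanent, the usual route is Spark's log4j config:
    #     cp $SPARK_HOME/conf/log4j.properties.template $SPARK_HOME/conf/log4j.properties
    # and then in conf/log4j.properties raise the global threshold, e.g.
    #     log4j.rootCategory=ERROR, console
    # or raise it only for the class printing these warnings (assuming the full
    # logger name behind "conf.HiveConf" is org.apache.hadoop.hive.conf.HiveConf):
    #     log4j.logger.org.apache.hadoop.hive.conf.HiveConf=ERROR

I would lean towards the log4j.properties route, because the warnings are emitted before I get a chance to type anything into the shell.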

1 answer

qq_45888663: Experts, please take a look at this, I am hitting the same problem.
(replied 3 months ago)

pycrossover (铲子挖掘数据): Er... I have already put hive-site.xml under the spark/conf directory, and that is exactly when the problem above appeared.
(replied 3 months ago)
Dump of the process-tree for container_1560238973821_0003_01_000005 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22987 22749 22749 22749 (java) 324 37 2560360448 31709 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 |- 22749 22720 22749 22749 (bash) 0 1 14622720 640 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_0 5 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000005/stderr [2019-06-11 16:05:40.620]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:42,482 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_0, Status : FAILED [2019-06-11 16:05:39.558]Container [pid=22675,containerID=container_1560238973821_0003_01_000002] is running 319543808B beyond the 'VIRTUAL' memory limit. Current usage: 125.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000002 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 22937 22675 22675 22675 (java) 316 38 2559778816 31497 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 |- 22675 22670 22675 22675 (bash) 0 0 14622720 612 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_0 2 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000002/stderr [2019-06-11 16:05:40.619]Container killed on request. Exit code is 143 [2019-06-11 16:05:40.622]Container exited with a non-zero exit code 143. 2019-06-11 16:05:52,546 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_1, Status : FAILED [2019-06-11 16:05:50.910]Container [pid=23116,containerID=container_1560238973821_0003_01_000006] is running 282286592B beyond the 'VIRTUAL' memory limit. Current usage: 68.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000006 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23194 23116 23116 23116 (java) 85 29 2522521600 16852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 |- 23116 23115 23116 23116 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_1 6 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000006/stderr [2019-06-11 16:05:50.970]Container killed on request. Exit code is 143 [2019-06-11 16:05:51.012]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,561 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_1, Status : FAILED [2019-06-11 16:05:54.193]Container [pid=23396,containerID=container_1560238973821_0003_01_000009] is running 313866752B beyond the 'VIRTUAL' memory limit. Current usage: 111.1 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000009 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23396 23394 23396 23396 (bash) 0 1 14622720 710 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009/stderr |- 23473 23396 23396 23396 (java) 245 40 2554101760 27743 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000009/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000009 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_1 9 [2019-06-11 16:05:54.228]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.263]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,563 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_1, Status : FAILED [2019-06-11 16:05:54.332]Container [pid=23304,containerID=container_1560238973821_0003_01_000008] is running 314042880B beyond the 'VIRTUAL' memory limit. Current usage: 113.8 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000008 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23381 23304 23304 23304 (java) 265 51 2554277888 28423 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 |- 23304 23302 23304 23304 (bash) 0 1 14622720 720 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_1 8 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000008/stderr [2019-06-11 16:05:54.353]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.381]Container exited with a non-zero exit code 143. 2019-06-11 16:05:55,565 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_1, Status : FAILED [2019-06-11 16:05:54.408]Container [pid=23200,containerID=container_1560238973821_0003_01_000007] is running 314497536B beyond the 'VIRTUAL' memory limit. Current usage: 115.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000007 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23200 23198 23200 23200 (bash) 0 1 14622720 711 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007/stderr |- 23277 23200 23200 23200 (java) 257 60 2554732544 28852 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_1 7 [2019-06-11 16:05:54.463]Container killed on request. Exit code is 143 [2019-06-11 16:05:54.482]Container exited with a non-zero exit code 143. 2019-06-11 16:06:01,619 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000002_2, Status : FAILED [2019-06-11 16:06:00.584]Container [pid=23515,containerID=container_1560238973821_0003_01_000011] is running 337451520B beyond the 'VIRTUAL' memory limit. Current usage: 203.4 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000011 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23515 23513 23515 23515 (bash) 0 1 14622720 712 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011/stderr |- 23592 23515 23515 23515 (java) 456 89 2577686528 51352 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000002_2 11 [2019-06-11 16:06:00.602]Container killed on request. Exit code is 143 [2019-06-11 16:06:00.659]Container exited with a non-zero exit code 143. 2019-06-11 16:06:05,651 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000000_2, Status : FAILED [2019-06-11 16:06:03.816]Container [pid=23651,containerID=container_1560238973821_0003_01_000012] is running 331475456B beyond the 'VIRTUAL' memory limit. Current usage: 173.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000012 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23728 23651 23651 23651 (java) 418 39 2571710464 43768 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 |- 23651 23649 23651 23651 (bash) 0 1 14622720 707 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000012/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000000_2 12 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000012/stderr [2019-06-11 16:06:03.981]Container killed on request. Exit code is 143 [2019-06-11 16:06:03.986]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,677 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000001_2, Status : FAILED [2019-06-11 16:06:07.127]Container [pid=23848,containerID=container_1560238973821_0003_01_000014] is running 335940096B beyond the 'VIRTUAL' memory limit. Current usage: 198.2 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000014 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23848 23847 23848 23848 (bash) 0 1 14622720 714 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014/stderr |- 23926 23848 23848 23848 (java) 408 59 2576175104 50032 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000014/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000014 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000001_2 14 [2019-06-11 16:06:07.186]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.201]Container exited with a non-zero exit code 143. 2019-06-11 16:06:08,678 INFO mapreduce.Job: Task Id : attempt_1560238973821_0003_m_000003_2, Status : FAILED [2019-06-11 16:06:07.227]Container [pid=23751,containerID=container_1560238973821_0003_01_000013] is running 337357312B beyond the 'VIRTUAL' memory limit. Current usage: 192.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container. 
Dump of the process-tree for container_1560238973821_0003_01_000013 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 23829 23751 23751 23751 (java) 463 52 2577592320 48632 /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 |- 23751 23749 23751 23751 (bash) 0 1 14622720 706 /bin/bash -c /opt/java/jdk1.8.0_181/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/bigdata/hadoop-3.1.1/tmp/nm-local-dir/usercache/liuyanbing/appcache/application_1560238973821_0003/container_1560238973821_0003_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.1.1 43655 attempt_1560238973821_0003_m_000003_2 13 1>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stdout 2>/bigdata/hadoop-3.1.1/logs/userlogs/application_1560238973821_0003/container_1560238973821_0003_01_000013/stderr [2019-06-11 16:06:07.280]Container killed on request. Exit code is 143 [2019-06-11 16:06:07.360]Container exited with a non-zero exit code 143. 2019-06-11 16:06:12,703 INFO mapreduce.Job: map 100% reduce 0% 2019-06-11 16:06:12,711 INFO mapreduce.Job: Job job_1560238973821_0003 failed with state FAILED due to: Task failed task_1560238973821_0003_m_000002 Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0 2019-06-11 16:06:12,979 INFO mapreduce.Job: Counters: 13 Job Counters Failed map tasks=13 Killed map tasks=3 Launched map tasks=16 Other local map tasks=12 Data-local map tasks=4 Total time spent by all maps in occupied slots (ms)=124936 Total time spent by all reduces in occupied slots (ms)=0 Total time spent by all map tasks (ms)=124936 Total vcore-milliseconds taken by all map tasks=124936 Total megabyte-milliseconds taken by all map tasks=127934464 Map-Reduce Framework CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 2019-06-11 16:06:12,986 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead 2019-06-11 16:06:12,990 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 61.7517 seconds (0 bytes/sec) 2019-06-11 16:06:12,999 INFO mapreduce.ExportJobBase: Exported 0 records. 2019-06-11 16:06:12,999 ERROR tool.ExportTool: Error during export: Export job failed!

I'm new to this and can't find the cause of the error. Could someone please take a look? Thanks!
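Every failed attempt above was killed by YARN for exceeding the virtual memory limit ("2.4 GB of 2.1 GB virtual memory used", i.e. 1 GB of physical container memory times the default yarn.nodemanager.vmem-pmem-ratio of 2.1), not by Sqoop itself. Below is a minimal sketch of the usual remedies, assuming you can edit yarn-site.xml on the NodeManager host(s) and restart YARN; the values shown are illustrative:

```
<!-- yarn-site.xml (sketch): either switch off the virtual-memory check... -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- ...or allow more virtual memory per MB of physical container memory -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

A third option is to give each map task more physical memory, for example by passing -Dmapreduce.map.memory.mb=2048 (with a matching -Xmx in mapreduce.map.java.opts) right after `sqoop export`, or by setting the same properties in mapred-site.xml. Once the containers stop being killed with exit code 143, the export should be able to run to completion.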
How can I get spark-sql to display the default database name?
Starting spark-sql: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552009814_366438.jpg) After spark-sql starts: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552009871_167191.jpg) I would like it to behave like the Hive CLI on startup and show the default database: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552009941_168959.jpg) Hive configuration file: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552010009_255879.jpg) spark-env configuration: ![screenshot](https://img-ask.csdn.net/upload/201903/08/1552010079_625482.jpg) Any help would be appreciated, thanks!
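For reference, the database name in the Hive CLI prompt is controlled by hive.cli.print.current.db. Whether a particular spark-sql build honors that option after reading hive-site.xml from $SPARK_HOME/conf is version-dependent, so the snippet below is only a hedged thing to try rather than a guaranteed fix:

```
<!-- hive-site.xml (sketch); assumption: this spark-sql version reads the Hive CLI print options -->
<property>
  <name>hive.cli.print.current.db</name>
  <value>true</value>
</property>
```

If spark-sql ignores the setting, the practical workaround is simply to run `USE your_db;` at the start of the session (or put it in a file passed with -i) instead of relying on the prompt.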
Loading data into Hive via Sqoop: how does Sqoop know where the Hive warehouse is?
I created my own hive-site.xml and specified the Hive warehouse location in it. The problem is: when I import data from SQL Server into Hive through Sqoop, how do I tell Sqoop to use my hive-site.xml so that it uses the warehouse I configured? We don't want to use the default Hive warehouse. Any help is appreciated.
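Sqoop's --hive-import path hands the final load off to the Hive client, so it uses whichever hive-site.xml that client picks up; pointing it at your own configuration directory is usually enough. A sketch under those assumptions (the /opt/myconf path, connection string, and table names are placeholders):

```
# Make the Hive client that Sqoop invokes read your own hive-site.xml
export HIVE_HOME=/opt/hive          # assumption: your Hive installation directory
export HIVE_CONF_DIR=/opt/myconf    # assumption: directory holding your custom hive-site.xml
# (copying the file into $SQOOP_HOME/conf/ is another common way to get it on Sqoop's classpath)

# --hive-import writes the data under whatever hive.metastore.warehouse.dir the
# picked-up hive-site.xml defines, instead of the default /user/hive/warehouse
bin/sqoop import \
  --connect 'jdbc:sqlserver://dbhost:1433;databaseName=mydb' \
  --username user -P \
  --table my_table \
  --hive-import --hive-table mydb.my_table
```

Note that with --hive-import the --target-dir / --warehouse-dir options only control the temporary HDFS staging directory; the final location comes from the Hive configuration that Sqoop's Hive client reads.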
Spark SQL integrated with Hive reports an error when creating an external table (help appreciated)
Spark SQL integrated with Hive fails when creating an external table. The CREATE TABLE statement is:

```
create external table if not exists bdm.itcast_bdm_order_goods(
  user_id string,    --user ID
  order_id string,   --order ID
  order_no string,   --order number
  sku_id bigint,     --SKU number
  sku_name string,   --SKU name
  goods_id bigint,   --goods number
)
partitioned by (dt string)
row format delimited fields terminated by ','
lines terminated by '\n'
location '/business/itcast_bdm_order_goods';
```

It fails with the following error:

```
Moved: 'hdfs://hann/business/itcast_bdm_order_goods' to trash at: hdfs://hann/user/root/.Trash/Current
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: java.net.UnknownHostException: nhann);
```

spark-sql is started with:

```
spark-sql --master spark://node01:7077 --driver-class-path /export/servers/hive-1.1.0-cdh5.14.0/lib/mysql-connector-java-5.1.38.jar --conf spark.sql.warehouse.dir=hdfs://hann/user/hive/warehouse
```

The hive-site.xml configuration file is:

```
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node03.hadoop.com:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <!--
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>node03.hadoop.com</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://node03.hadoop.com:9083</value>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>3600</value>
  </property>
  -->
</configuration>
```
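The exception names the host nhann while every URI in the question uses the nameservice hann, which suggests a stray or misspelled nameservice value in the configuration Spark reads rather than a problem with the CREATE TABLE statement itself. A small sketch of how one might track it down (directory paths are illustrative, following the layout in the question):

```
# Look for the misspelled nameservice in every config directory Spark and Hadoop load
grep -rn "nhann" \
  $SPARK_HOME/conf \
  $HADOOP_HOME/etc/hadoop \
  /export/servers/hive-1.1.0-cdh5.14.0/conf 2>/dev/null

# Also confirm the hdfs-site.xml visible to Spark actually defines the 'hann' nameservice
grep -rn "dfs.nameservices" $SPARK_HOME/conf $HADOOP_HOME/etc/hadoop 2>/dev/null
```

Once the typo is fixed, or the HA nameservice definition is made visible to Spark (for example by copying core-site.xml and hdfs-site.xml into $SPARK_HOME/conf), rerunning the CREATE EXTERNAL TABLE statement should no longer hit the UnknownHostException.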
Hive startup prints "which: no hbase"
After installing Hive, starting it prints "which: no hbase", but I can still create databases, create tables, and run queries, and the MySQL database Hive connects to now contains a hive database (the metastore). 1. Posts online say to add mysql-connector-java-5.1.47-bin.jar under /hive/lib; I did, but it made no difference. 2. There is no other error message here, so I'd like to know which directory Hive's startup log is written to. 3. I want to connect to Hive with Beeline; do I need to install HBase for that?

```
[root@devcrm ~]# hive
which: no hbase in (/usr/local/kafka/zookeeper-3.4.10/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/open/maven/rj/apache-maven-3.5.2/bin:/usr/local/java/bin:/usr/local/kafka/hadoop-2.7.6/bin:/usr/local/kafka/hadoop-2.7.6/sbin:/usr/local/kafka/hive/bin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/kafka/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/kafka/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/kafka/hive/lib/hive-common-2.3.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive> use myhive;
OK
Time taken: 3.439 seconds
hive> select * from student where name like '%小%;
OK
95014  王小丽  女  19  CS
95019  邢小丽  女  19  IS
95010  孔小涛  男  19  CS
95011  包小柏  男  18  MA
95014  王小丽  女  19  CS
95019  邢小丽  女  19  IS
95010  孔小涛  男  19  CS
95011  包小柏  男  18  MA
Time taken: 1.901 seconds, Fetched: 8 row(s)
hive>
```

This is the MySQL database Hive connects to: ![screenshot](https://img-ask.csdn.net/upload/201904/23/1555982309_734580.png) The MySQL driver jar added under hive/lib: ![screenshot](https://img-ask.csdn.net/upload/201904/23/1555982608_723323.png)
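"which: no hbase" is only the hive launcher script probing for an optional hbase binary on the PATH, so it is harmless and HBase is not required for Beeline. A hedged sketch of where the default Hive 2.x log ends up and what a Beeline connection typically looks like (the host devcrm and user root are just the values from the question; the port assumes HiveServer2's default 10000):

```
# Default Hive 2.x log file, per the hive-log4j2.properties shipped with Hive
# (property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}):
tail -f /tmp/$(whoami)/hive.log

# Beeline talks to HiveServer2, not HBase, so start HiveServer2 first...
hive --service hiveserver2 &

# ...then connect with Beeline:
beeline -u jdbc:hive2://devcrm:10000 -n root
```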
Hive configured with Oracle as the metastore reports ORA-01754
Hive is configured in remote mode with Oracle as the metastore database. It starts normally, but creating a table fails:

```
hive> create table dht_tab(name1 int,name45 varchar(50))row format delimited fields terminated by '\t';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : ORA-01754: a table may contain only one column of type LONG
java.sql.SQLSyntaxErrorException: ORA-01754: a table may contain only one column of type LONG
```

Following advice found online, I edited the package.jdo file inside hive/lib/hive-metastore-1.2.1.jar and changed the LONGVARCHAR type to CLOB, like this:

```
cd $HIVE_HOME/lib
mkdir temp
cp hive-metastore-1.2.1.jar temp
cd temp
jar -xvf hive-metastore-1.2.1.jar
sed -i -e 's/LONGVARCHAR/CLOB/g' package.jdo
jar cfm hive-metastore-1.2.1.jar META-INF/MANIFEST.MF *
cp hive-metastore-1.2.1.jar $HIVE_HOME/lib
```

Then I reinitialized Hive with `hive --service metastore`, but creating a table still fails with the same error. Any ideas?

One more question: `hive --service hiveserver2` produces no output at all. ![screenshot](https://img-ask.csdn.net/upload/201604/19/1461049192_64211.png) Thanks!
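Two things are worth checking with the repack above: that the patched package.jdo really is inside the jar Hive loads, and, since this is a remote-mode metastore, that the metastore service itself is restarted with the patched jar (the ORA-01754 translation happens in that service, not in the CLI). A sketch of a lighter-weight way to apply and verify the patch, reusing the paths from the question:

```
# Update only package.jdo inside the existing jar instead of rebuilding the whole archive
cd $HIVE_HOME/lib/temp
jar uf $HIVE_HOME/lib/hive-metastore-1.2.1.jar package.jdo

# Verify the jar that will be loaded now contains CLOB (a count of 0 means the patch was lost)
unzip -p $HIVE_HOME/lib/hive-metastore-1.2.1.jar package.jdo | grep -c CLOB

# Restart the remote metastore so it picks up the change; note that
# `hive --service hiveserver2` printing nothing is normal, since it runs in the
# foreground (check its log or background it with &)
hive --service metastore &
hive --service hiveserver2 &
```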
Hive newbie question: why does this statement report "Invalid column reference"?
HiveQL: ![screenshot](https://img-ask.csdn.net/upload/201905/11/1557545260_846591.png) Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201905/11/1557545348_741694.png) I'm new to Hive; any pointers would be appreciated.