After I put hive-site.xml into spark/conf/ I get a pile of warnings — how do I deal with them, and does it matter if I don't?

When I set everything up earlier I never noticed that I had forgotten to put the hive-site.xml config file into spark/conf. Today I copied the file in, and now a pile of warnings appears the moment I open pyspark, and again whenever I use Spark SQL. The warnings are shown below.
Since the output is so long, I'll put my thoughts here first: I'd like to know how to raise the logging threshold for these messages, or how to resolve the warnings themselves. Could the experts please take a look? Thanks!
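My current guess: the hive-site.xml I copied over was written for a newer Hive release than the Hive 1.2.1 client bundled with Spark 2.4.4, so keys such as hive.llap.*, hive.druid.* and the hive.metastore.hbase.* ones are simply unknown to Spark's built-in HiveConf and get reported at WARN level. If that's right, the warnings should be harmless, and one way to silence them would be to raise the threshold for that single logger in spark/conf/log4j.properties. A minimal sketch of what I have in mind (assuming log4j.properties is copied from the stock log4j.properties.template):

```properties
# spark/conf/log4j.properties
# (first copy spark/conf/log4j.properties.template to log4j.properties)

# Report only ERROR and above from HiveConf; this hides the
# "HiveConf of name ... does not exist" WARN lines at startup.
log4j.logger.org.apache.hadoop.hive.conf.HiveConf=ERROR
```

Is hiding them like this safe, or should the unrecognized keys be pruned from hive-site.xml instead?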

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.metastore.client.capability.check does not exist
2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
2019-11-14 17:13:50,994 WARN conf.HiveConf: HiveConf of name hive.druid.broker.address.default does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve-fraction.min does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.ppd.enabled does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.metastore.event.message.factory does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.hs2.user.access does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.druid.storage.storageDirectory does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.dynamic.semijoin.reduction.threshold does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.xmx.headroom does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.dynamic.semijoin.reduction does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.direct does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.stats does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.client.consistent.splits does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.start does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.ttl does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.management.acl does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.delegation.token.lifetime does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.guidKey does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.ats.hook.queue.capacity does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.strict.checks.large.query does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.tez.bigtable.minsize.semijoin.reduction does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.min does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.user does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.alloc.size does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.wait.queue.comparator.class.name does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.port does not exist
2019-11-14 17:13:50,995 WARN conf.HiveConf: HiveConf of name hive.orc.cache.use.soft.references does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.enabled does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve.fraction.max does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.listener.thread-count does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.tez.container.max.java.heap.fraction does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.stats.column.autogather does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am.liveness.heartbeat.interval.ms does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.decoding.metrics.percentiles.intervals does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.groupby.position.alias does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.metastore.txn.store.impl does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.spark.use.groupby.shuffle does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.object.cache.enabled does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.parallel.ops.in.session does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.groupby.limit.extrastep does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.location does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.delay.seconds does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.materializedview.fileformat does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.file.cleaner.threads does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.test.fail.compaction does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.blobstore.use.blobstore.as.scratchdir does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.service.metrics.class does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.mmap.path does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.download.permanent.fns does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.historic.queries does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.vectorized.execution.reducesink.new.enabled does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.compactor.max.num.delta does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.attempted does not exist
2019-11-14 17:13:50,996 WARN conf.HiveConf: HiveConf of name hive.server2.webui.port does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.compactor.initiator.failed.compacts.threshold does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.service.metrics.reporter does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.max.pending.writes does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.execution.mode does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.llap.enable.grace.join.in.llap does not exist
2019-11-14 17:13:50,999 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.mode does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.io.threadpool.size does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.druid.select.threshold does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.scratchdir.lock does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.spnego does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.frequency does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.hs2.coordinator.enabled does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.timeout.seconds does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.filter.stats.reduction does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.exec.orc.base.delta.ratio does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.fastpath does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.clear.dangling.scratchdir does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.test.fail.heartbeater does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.file.cleanup.delay.seconds does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.management.rpc.port does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.mapjoin.hybridgrace.bloomfilter does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.tree does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.stats.ndv.tuner does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.query.length does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.failed does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.close.session.on.disconnect does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.ppd.windowing does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.initial.metadata.count.enabled does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.enabled does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup.min does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.file.metadata.threads does not exist
2019-11-14 17:13:51,000 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.service.refresh.interval.sec does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.output.size does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.remote.token.requires.signing does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.cache.allow.synthetic.fileid does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.hash.table.inflation.factor does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.hbase.ttl does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.vectorized does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.writeset.reaper.interval does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.vector.serde.deserialize does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.order.columnalignment does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.service.send.buffer.size does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.exec.schema.evolution does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.elements.values.clause does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.server2.llap.concurrent.queries does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.allow.uber does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.partition.size.max does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.auto.auth does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.orc.splits.include.fileid does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.communicator.num.threads does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.orderby.position.alias does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.sleep.between.retries.ms does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.partitions does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.service.metrics.hadoop2.component does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.shuffle.port does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.direct.sql.max.elements.in.clause does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.druid.passiveWaitTimeMs does not exist
2019-11-14 17:13:51,001 WARN conf.HiveConf: HiveConf of name hive.load.dynamic.partitions.thread does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.segments.granularity does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.response.header.size does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.conf.internal.variable.list does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductionpercentage does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.repl.cm.enabled does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.limit does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.serialize.in.tasks does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.query.timeout.seconds does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.service.metrics.hadoop2.frequency does not exist
2019-11-14 17:13:51,002 WARN conf.HiveConf: HiveConf of name hive.orc.splits.directory.batch.ms does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.reader.wait does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.max.timeout.ms does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.max.open.txns does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.reduce.side does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.server2.zookeeper.publish.configs does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.auto.convert.join.hashtable.max.entries does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.server2.tez.sessions.init.threads does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.metastore.authorization.storage.check.externaltable.drop does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.execution.mode does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.cbo.cnf.maxnodes does not exist
2019-11-14 17:13:51,004 WARN conf.HiveConf: HiveConf of name hive.vectorized.adaptor.usage.mode does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.materializedview.rewriting does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.groupMembershipKey does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.catalog.cache.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.cbo.show.warnings does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.fshandler.threads does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.max.bloom.filter.entries does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.metadata.fraction does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.materializedview.serde does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.wait.queue.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.cache.entries does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.txn.operational.properties does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.memory.ttl does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.port does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.nonvector.wrapper.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.cache.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.vectorized.input.format does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.optimize.cte.materialize.threshold does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.clean.until does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.optimize.semijoin.conversion does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.port does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.metrics.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.repl.rootdir does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.limit.partition.request does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.async.log.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.logger does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.allow.udf.load.on.demand does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.cli.tez.session.async does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.bloom.filter.factor does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am-reporter.max.threads does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.use.file.size.for.mapjoin does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning.compat does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.webui.spnego.principal does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.preemption.metrics.intervals does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.shuffle.dir.watcher.enabled does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.arena.count does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.use.SSL does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.timeout.ms does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.transpose.aggr.join does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.maxTries does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning.max.data.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.metadata.base does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.invalidator.frequency does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.lrfu does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.mmap does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.druid.coordinator.address.default does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.max.fetch.size does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.conf.hidden.list does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.io.sarg.cache.max.weight.mb does not exist
2019-11-14 17:13:51,005 WARN conf.HiveConf: HiveConf of name hive.server2.clear.dangling.scratchdir.interval does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.druid.sleep.time does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.vectorized.use.row.serde.deserialize does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.compile.lock.timeout does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.interval does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.variance does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.lrfu.lambda does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.druid.metadata.db.type does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.output.stream.timeout does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.transactional.events.mem does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.resultset.default.fetch.size does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.repl.cm.retain does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.merge.cardinality.check does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.server2.authentication.ldap.groupClassKey does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.allow.permanent.fns does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.ssl does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.txn.manager.dump.lock.state.on.acquire.timeout does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.succeeded does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.fileid.path does not exist
2019-11-14 17:13:51,006 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.slice.row.count does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.hashtable.probe.percent does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.am.use.fqdn does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.min.timeout.ms does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.validate.acls does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.support.special.characters.tablename does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.mv.files.thread does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.skip.compile.udf.check does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.vector.serde.enabled does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.repl.cm.interval does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.server2.sleep.interval.between.start.attempts does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.container.mb does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.druid.http.read.timeout does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.blobstore.optimizations.enabled does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.orc.gap.cache does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.exec.copyfile.maxnumfiles does not exist
2019-11-14 17:13:51,007 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.formats does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.druid.http.numConnection does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.enable.preemption does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.executors does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.full does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.connection.class does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.tez.sessions.custom.queue.allowed does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.slice.lrr does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.password does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.writer.wait does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.request.header.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.threads does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductiontuples does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.test.rollbacktxn does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.num.schedulable.tasks.per.node does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.acl does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.strict.checks.type.safety does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.server2.async.exec.async.compile does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.input.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.tez.enable.memory.manager does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.msck.repair.batch.size does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.blobstore.supported.schemes does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.orc.splits.allow.synthetic.fileid does not exist
2019-11-14 17:13:51,008 WARN conf.HiveConf: HiveConf of name hive.stats.filter.in.factor does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.spark.use.op.stats does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.exec.input.listing.max.threads does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime.jitter does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.port does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.strict.checks.cartesian.product does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.num.handlers does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.vcpus.per.instance does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.count.open.txns.interval does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.min.bloom.filter.entries does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.optimize.partition.columns.separate does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.orc.cache.stripe.details.mem.size does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.txn.heartbeat.threadpool.size does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.locality.delay does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.repl.cmrootdir does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.spark.exec.inplace.progress does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.druid.working.directory does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.task.scale.memory.reserve.fraction does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.merge.nway.joins does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.txn.strict.locking.mode does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.io.encode.vector.serde.async.enabled does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.in.place.progress does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.druid.indexer.memory.rownum.max does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.server2.xsrf.filter.enabled does not exist
2019-11-14 17:13:51,009 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Python version 3.7.4 (default, Sep 20 2019 17:49:03)
SparkSession available as 'spark'.
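For what it's worth, the shell still comes up after the wall of warnings and queries run, so in the meantime I quiet the in-session output like this (a sketch; setLogLevel only takes effect after the context is up, so it cannot hide the startup warnings above):

```python
# Inside the pyspark shell; a SparkSession is already available as `spark`.
# Raise the log threshold for everything logged from this point on.
spark.sparkContext.setLogLevel("ERROR")

# Hive integration still appears to work despite the warnings,
# assuming the metastore configured in hive-site.xml is reachable.
spark.sql("SHOW DATABASES").show()
```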

1 Answer

qq_45888663: Experts, please come take a look — I have the same problem.
Replied 5 months ago
pycrossover (铲子挖掘数据): Er... I had already put hive-site.xml under the spark/conf directory — that is exactly when the problems above appeared.
Replied 5 months ago
999635 timestamp: 1565697198445 type: FILE visibility: PRIVATE hadoop-common-2.7.3.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hadoop-common-2.7.3.jar" } size: 3479293 timestamp: 1565697197476 type: FILE visibility: PRIVATE =============================================================================== 19/08/13 19:53:20 INFO RMProxy: Connecting to ResourceManager at namenode02/10.1.38.38:8030 19/08/13 19:53:20 INFO YarnRMClient: Registering the ApplicationMaster 19/08/13 19:53:20 INFO YarnAllocator: Will request 3 executor container(s), each with 2 core(s) and 5632 MB memory (including 512 MB of overhead) 19/08/13 19:53:20 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@datanode02:20410) 19/08/13 19:53:20 INFO YarnAllocator: Submitted 3 unlocalized container requests. 19/08/13 19:53:20 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals 19/08/13 19:53:20 INFO AMRMClientImpl: Received new token for : datanode03:45454 19/08/13 19:53:21 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000002 on host datanode03 for executor with ID 1 19/08/13 19:53:21 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: Opening proxy : datanode03:45454 19/08/13 19:53:21 INFO AMRMClientImpl: Received new token for : datanode01:45454 19/08/13 19:53:21 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000003 on host datanode01 for executor with ID 2 19/08/13 19:53:21 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: Opening proxy : datanode01:45454 19/08/13 19:53:22 INFO AMRMClientImpl: Received new token for : datanode02:45454 19/08/13 19:53:22 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000004 on host datanode02 for executor with ID 3 19/08/13 19:53:22 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 
19/08/13 19:53:22 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:22 INFO ContainerManagementProtocolProxy: Opening proxy : datanode02:45454 19/08/13 19:53:24 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.198.144:41122) with ID 1 19/08/13 19:53:25 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.229.163:24656) with ID 3 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode03:3328 with 2.5 GB RAM, BlockManagerId(1, datanode03, 3328, None) 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode02:28863 with 2.5 GB RAM, BlockManagerId(3, datanode02, 28863, None) 19/08/13 19:53:25 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.229.158:64276) with ID 2 19/08/13 19:53:25 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 19/08/13 19:53:25 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode01:20487 with 2.5 GB RAM, BlockManagerId(2, datanode01, 20487, None) 19/08/13 19:53:25 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect. 19/08/13 19:53:25 INFO SparkContext: Starting job: start at VoiceApplication2.java:128 19/08/13 19:53:25 INFO DAGScheduler: Registering RDD 1 (start at VoiceApplication2.java:128) 19/08/13 19:53:25 INFO DAGScheduler: Got job 0 (start at VoiceApplication2.java:128) with 20 output partitions 19/08/13 19:53:25 INFO DAGScheduler: Final stage: ResultStage 1 (start at VoiceApplication2.java:128) 19/08/13 19:53:25 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0) 19/08/13 19:53:25 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0) 19/08/13 19:53:26 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[1] at start at VoiceApplication2.java:128), which has no missing parents 19/08/13 19:53:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.1 KB, free 366.3 MB) 19/08/13 19:53:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2011.0 B, free 366.3 MB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode02:31984 (size: 2011.0 B, free: 366.3 MB) 19/08/13 19:53:26 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039 19/08/13 19:53:26 INFO DAGScheduler: Submitting 50 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[1] at start at VoiceApplication2.java:128) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)) 19/08/13 19:53:26 INFO YarnClusterScheduler: Adding task set 0.0 with 50 tasks 19/08/13 19:53:26 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, datanode02, executor 3, partition 0, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, datanode03, executor 1, partition 1, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, datanode01, executor 2, partition 2, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, datanode02, executor 3, partition 3, PROCESS_LOCAL, 7831 
bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, datanode03, executor 1, partition 4, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, datanode01, executor 2, partition 5, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode02:28863 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode03:3328 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode01:20487 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, datanode02, executor 3, partition 6, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, datanode02, executor 3, partition 7, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 693 ms on datanode02 (executor 3) (1/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 712 ms on datanode02 (executor 3) (2/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, datanode02, executor 3, partition 8, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 21 ms on datanode02 (executor 3) (3/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, datanode02, executor 3, partition 9, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 26 ms on datanode02 (executor 3) (4/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, datanode02, executor 3, partition 10, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 23 ms on datanode02 (executor 3) (5/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, datanode02, executor 3, partition 11, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 25 ms on datanode02 (executor 3) (6/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 12.0 in stage 0.0 (TID 12, datanode02, executor 3, partition 12, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 18 ms on datanode02 (executor 3) (7/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 14 ms on datanode02 (executor 3) (8/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 13.0 in stage 0.0 (TID 13, datanode02, executor 3, partition 13, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 14.0 in stage 0.0 (TID 14, datanode02, executor 3, partition 14, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 12.0 in stage 0.0 (TID 12) in 16 ms on datanode02 (executor 3) (9/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, datanode02, executor 3, partition 15, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 13.0 in stage 0.0 (TID 13) in 22 ms on datanode02 (executor 3) (10/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 16.0 in stage 0.0 (TID 16, datanode02, executor 3, partition 16, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 14.0 in stage 0.0 (TID 14) 
in 16 ms on datanode02 (executor 3) (11/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 17.0 in stage 0.0 (TID 17, datanode02, executor 3, partition 17, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 15.0 in stage 0.0 (TID 15) in 13 ms on datanode02 (executor 3) (12/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 18.0 in stage 0.0 (TID 18, datanode01, executor 2, partition 18, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 19.0 in stage 0.0 (TID 19, datanode01, executor 2, partition 19, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 787 ms on datanode01 (executor 2) (13/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 789 ms on datanode01 (executor 2) (14/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 20.0 in stage 0.0 (TID 20, datanode03, executor 1, partition 20, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 21.0 in stage 0.0 (TID 21, datanode03, executor 1, partition 21, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 905 ms on datanode03 (executor 1) (15/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 907 ms on datanode03 (executor 1) (16/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 22.0 in stage 0.0 (TID 22, datanode02, executor 3, partition 22, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 23.0 in stage 0.0 (TID 23, datanode02, executor 3, partition 23, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 24.0 in stage 0.0 (TID 24, datanode01, executor 2, partition 24, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 18.0 in stage 0.0 (TID 18) in 124 ms on datanode01 (executor 2) (17/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 16.0 in stage 0.0 (TID 16) in 134 ms on datanode02 (executor 3) (18/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 25.0 in stage 0.0 (TID 25, datanode01, executor 2, partition 25, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 26.0 in stage 0.0 (TID 26, datanode03, executor 1, partition 26, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 17.0 in stage 0.0 (TID 17) in 134 ms on datanode02 (executor 3) (19/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 20.0 in stage 0.0 (TID 20) in 122 ms on datanode03 (executor 1) (20/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 27.0 in stage 0.0 (TID 27, datanode03, executor 1, partition 27, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 19.0 in stage 0.0 (TID 19) in 127 ms on datanode01 (executor 2) (21/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 21.0 in stage 0.0 (TID 21) in 123 ms on datanode03 (executor 1) (22/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 28.0 in stage 0.0 (TID 28, datanode02, executor 3, partition 28, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 29.0 in stage 0.0 (TID 29, datanode02, executor 3, partition 29, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 22.0 in stage 0.0 (TID 22) in 19 ms on datanode02 (executor 3) (23/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 23.0 in stage 0.0 (TID 23) in 18 ms on datanode02 (executor 3) (24/50) 19/08/13 
19:53:27 INFO TaskSetManager: Starting task 30.0 in stage 0.0 (TID 30, datanode01, executor 2, partition 30, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 31.0 in stage 0.0 (TID 31, datanode01, executor 2, partition 31, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 25.0 in stage 0.0 (TID 25) in 27 ms on datanode01 (executor 2) (25/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 24.0 in stage 0.0 (TID 24) in 29 ms on datanode01 (executor 2) (26/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 32.0 in stage 0.0 (TID 32, datanode02, executor 3, partition 32, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 29.0 in stage 0.0 (TID 29) in 16 ms on datanode02 (executor 3) (27/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 33.0 in stage 0.0 (TID 33, datanode03, executor 1, partition 33, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 26.0 in stage 0.0 (TID 26) in 30 ms on datanode03 (executor 1) (28/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 34.0 in stage 0.0 (TID 34, datanode02, executor 3, partition 34, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 28.0 in stage 0.0 (TID 28) in 21 ms on datanode02 (executor 3) (29/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 35.0 in stage 0.0 (TID 35, datanode03, executor 1, partition 35, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 27.0 in stage 0.0 (TID 27) in 32 ms on datanode03 (executor 1) (30/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 36.0 in stage 0.0 (TID 36, datanode02, executor 3, partition 36, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 32.0 in stage 0.0 (TID 32) in 11 ms on datanode02 (executor 3) (31/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 37.0 in stage 0.0 (TID 37, datanode01, executor 2, partition 37, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 30.0 in stage 0.0 (TID 30) in 18 ms on datanode01 (executor 2) (32/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 38.0 in stage 0.0 (TID 38, datanode01, executor 2, partition 38, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 31.0 in stage 0.0 (TID 31) in 20 ms on datanode01 (executor 2) (33/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 39.0 in stage 0.0 (TID 39, datanode03, executor 1, partition 39, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 33.0 in stage 0.0 (TID 33) in 17 ms on datanode03 (executor 1) (34/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 34.0 in stage 0.0 (TID 34) in 17 ms on datanode02 (executor 3) (35/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 40.0 in stage 0.0 (TID 40, datanode02, executor 3, partition 40, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 41.0 in stage 0.0 (TID 41, datanode03, executor 1, partition 41, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 35.0 in stage 0.0 (TID 35) in 17 ms on datanode03 (executor 1) (36/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 42.0 in stage 0.0 (TID 42, datanode02, executor 3, partition 42, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 36.0 in stage 0.0 (TID 36) in 16 ms on datanode02 (executor 3) (37/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 43.0 in stage 
0.0 (TID 43, datanode01, executor 2, partition 43, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 37.0 in stage 0.0 (TID 37) in 16 ms on datanode01 (executor 2) (38/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 44.0 in stage 0.0 (TID 44, datanode02, executor 3, partition 44, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 45.0 in stage 0.0 (TID 45, datanode02, executor 3, partition 45, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 40.0 in stage 0.0 (TID 40) in 14 ms on datanode02 (executor 3) (39/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 42.0 in stage 0.0 (TID 42) in 11 ms on datanode02 (executor 3) (40/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 46.0 in stage 0.0 (TID 46, datanode03, executor 1, partition 46, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 39.0 in stage 0.0 (TID 39) in 20 ms on datanode03 (executor 1) (41/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 47.0 in stage 0.0 (TID 47, datanode03, executor 1, partition 47, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 41.0 in stage 0.0 (TID 41) in 20 ms on datanode03 (executor 1) (42/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 48.0 in stage 0.0 (TID 48, datanode01, executor 2, partition 48, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 49.0 in stage 0.0 (TID 49, datanode01, executor 2, partition 49, PROCESS_LOCAL, 7888 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 43.0 in stage 0.0 (TID 43) in 18 ms on datanode01 (executor 2) (43/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 38.0 in stage 0.0 (TID 38) in 31 ms on datanode01 (executor 2) (44/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 45.0 in stage 0.0 (TID 45) in 11 ms on datanode02 (executor 3) (45/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 44.0 in stage 0.0 (TID 44) in 16 ms on datanode02 (executor 3) (46/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 46.0 in stage 0.0 (TID 46) in 18 ms on datanode03 (executor 1) (47/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 48.0 in stage 0.0 (TID 48) in 15 ms on datanode01 (executor 2) (48/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 47.0 in stage 0.0 (TID 47) in 15 ms on datanode03 (executor 1) (49/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 49.0 in stage 0.0 (TID 49) in 25 ms on datanode01 (executor 2) (50/50) 19/08/13 19:53:27 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 19/08/13 19:53:27 INFO DAGScheduler: ShuffleMapStage 0 (start at VoiceApplication2.java:128) finished in 1.174 s 19/08/13 19:53:27 INFO DAGScheduler: looking for newly runnable stages 19/08/13 19:53:27 INFO DAGScheduler: running: Set() 19/08/13 19:53:27 INFO DAGScheduler: waiting: Set(ResultStage 1) 19/08/13 19:53:27 INFO DAGScheduler: failed: Set() 19/08/13 19:53:27 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[2] at start at VoiceApplication2.java:128), which has no missing parents 19/08/13 19:53:27 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.2 KB, free 366.3 MB) 19/08/13 19:53:27 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1979.0 B, free 366.3 MB) 19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode02:31984 (size: 1979.0 B, free: 366.3 MB) 19/08/13 
19:53:27 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1039 19/08/13 19:53:27 INFO DAGScheduler: Submitting 20 missing tasks from ResultStage 1 (ShuffledRDD[2] at start at VoiceApplication2.java:128) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)) 19/08/13 19:53:27 INFO YarnClusterScheduler: Adding task set 1.0 with 20 tasks 19/08/13 19:53:27 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 50, datanode03, executor 1, partition 0, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 51, datanode02, executor 3, partition 1, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 52, datanode01, executor 2, partition 3, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 53, datanode03, executor 1, partition 2, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 54, datanode02, executor 3, partition 4, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 55, datanode01, executor 2, partition 5, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode02:28863 (size: 1979.0 B, free: 2.5 GB) 19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode01:20487 (size: 1979.0 B, free: 2.5 GB) 19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode03:3328 (size: 1979.0 B, free: 2.5 GB) 19/08/13 19:53:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.1.229.163:24656 19/08/13 19:53:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.1.198.144:41122 19/08/13 19:53:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.1.229.158:64276 19/08/13 19:53:27 INFO TaskSetManager: Starting task 7.0 in stage 1.0 (TID 56, datanode03, executor 1, partition 7, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 53) in 192 ms on datanode03 (executor 1) (1/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 8.0 in stage 1.0 (TID 57, datanode03, executor 1, partition 8, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 7.0 in stage 1.0 (TID 56) in 25 ms on datanode03 (executor 1) (2/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 58, datanode02, executor 3, partition 6, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 51) in 220 ms on datanode02 (executor 3) (3/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 14.0 in stage 1.0 (TID 59, datanode03, executor 1, partition 14, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 57) in 17 ms on datanode03 (executor 1) (4/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 16.0 in stage 1.0 (TID 60, datanode03, executor 1, partition 16, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 14.0 in stage 1.0 (TID 59) in 15 ms on datanode03 (executor 1) (5/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 16.0 in stage 1.0 (TID 60) in 21 ms on datanode03 (executor 1) (6/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 9.0 in stage 1.0 (TID 61, datanode02, executor 3, partition 
9, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 54) in 269 ms on datanode02 (executor 3) (7/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 50) in 339 ms on datanode03 (executor 1) (8/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 10.0 in stage 1.0 (TID 62, datanode02, executor 3, partition 10, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 58) in 56 ms on datanode02 (executor 3) (9/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 11.0 in stage 1.0 (TID 63, datanode01, executor 2, partition 11, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 55) in 284 ms on datanode01 (executor 2) (10/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 12.0 in stage 1.0 (TID 64, datanode01, executor 2, partition 12, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 52) in 287 ms on datanode01 (executor 2) (11/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 13.0 in stage 1.0 (TID 65, datanode02, executor 3, partition 13, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 15.0 in stage 1.0 (TID 66, datanode02, executor 3, partition 15, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 10.0 in stage 1.0 (TID 62) in 25 ms on datanode02 (executor 3) (12/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 9.0 in stage 1.0 (TID 61) in 29 ms on datanode02 (executor 3) (13/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 17.0 in stage 1.0 (TID 67, datanode02, executor 3, partition 17, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 15.0 in stage 1.0 (TID 66) in 13 ms on datanode02 (executor 3) (14/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 13.0 in stage 1.0 (TID 65) in 16 ms on datanode02 (executor 3) (15/20) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 18.0 in stage 1.0 (TID 68, datanode02, executor 3, partition 18, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 19.0 in stage 1.0 (TID 69, datanode01, executor 2, partition 19, NODE_LOCAL, 7638 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 11.0 in stage 1.0 (TID 63) in 30 ms on datanode01 (executor 2) (16/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 12.0 in stage 1.0 (TID 64) in 30 ms on datanode01 (executor 2) (17/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 17.0 in stage 1.0 (TID 67) in 17 ms on datanode02 (executor 3) (18/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 19.0 in stage 1.0 (TID 69) in 13 ms on datanode01 (executor 2) (19/20) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 18.0 in stage 1.0 (TID 68) in 20 ms on datanode02 (executor 3) (20/20) 19/08/13 19:53:27 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 19/08/13 19:53:27 INFO DAGScheduler: ResultStage 1 (start at VoiceApplication2.java:128) finished in 0.406 s 19/08/13 19:53:27 INFO DAGScheduler: Job 0 finished: start at VoiceApplication2.java:128, took 1.850883 s 19/08/13 19:53:27 INFO ReceiverTracker: Starting 1 receivers 19/08/13 19:53:27 INFO ReceiverTracker: ReceiverTracker started 19/08/13 19:53:27 INFO KafkaInputDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO KafkaInputDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO KafkaInputDStream: Checkpoint interval = 
null 19/08/13 19:53:27 INFO KafkaInputDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81 19/08/13 19:53:27 INFO ForEachDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO ForEachDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO ForEachDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO ForEachDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@4044ec97 19/08/13 19:53:27 INFO KafkaInputDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO KafkaInputDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO KafkaInputDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO KafkaInputDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81 19/08/13 19:53:27 INFO MappedDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO MappedDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO MappedDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO MappedDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@5dd4b960 19/08/13 19:53:27 INFO ForEachDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO ForEachDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO ForEachDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO ForEachDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@132d0c3c 19/08/13 19:53:27 INFO KafkaInputDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO KafkaInputDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO KafkaInputDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO KafkaInputDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81 19/08/13 19:53:27 INFO MappedDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO MappedDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO MappedDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO MappedDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@5dd4b960 19/08/13 19:53:27 INFO ForEachDStream: Slide time = 60000 ms 19/08/13 19:53:27 INFO ForEachDStream: Storage level = Serialized 1x Replicated 19/08/13 19:53:27 INFO ForEachDStream: Checkpoint interval = null 19/08/13 19:53:27 INFO ForEachDStream: Remember interval = 60000 ms 19/08/13 19:53:27 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@525bed0c 19/08/13 19:53:27 INFO DAGScheduler: Got job 1 (start at VoiceApplication2.java:128) with 1 output partitions 19/08/13 19:53:27 INFO DAGScheduler: Final stage: ResultStage 2 (start at VoiceApplication2.java:128) 19/08/13 19:53:27 INFO DAGScheduler: Parents of final stage: List() 19/08/13 19:53:27 INFO DAGScheduler: Missing parents: List() 19/08/13 19:53:27 INFO DAGScheduler: Submitting ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:613), which has no missing parents 19/08/13 19:53:27 INFO 
ReceiverTracker: Receiver 0 started 19/08/13 19:53:27 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 133.5 KB, free 366.2 MB) 19/08/13 19:53:27 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 36.3 KB, free 366.1 MB) 19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on datanode02:31984 (size: 36.3 KB, free: 366.3 MB) 19/08/13 19:53:27 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1039 19/08/13 19:53:27 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:613) (first 15 tasks are for partitions Vector(0)) 19/08/13 19:53:27 INFO YarnClusterScheduler: Adding task set 2.0 with 1 tasks 19/08/13 19:53:27 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 70, datanode01, executor 2, partition 0, PROCESS_LOCAL, 8757 bytes) 19/08/13 19:53:27 INFO RecurringTimer: Started timer for JobGenerator at time 1565697240000 19/08/13 19:53:27 INFO JobGenerator: Started JobGenerator at 1565697240000 ms 19/08/13 19:53:27 INFO JobScheduler: Started JobScheduler 19/08/13 19:53:27 INFO StreamingContext: StreamingContext started 19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on datanode01:20487 (size: 36.3 KB, free: 2.5 GB) 19/08/13 19:53:27 INFO ReceiverTracker: Registered receiver for stream 0 from 10.1.229.158:64276 19/08/13 19:54:00 INFO JobScheduler: Added jobs for time 1565697240000 ms 19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.0 from job set of time 1565697240000 ms 19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.1 from job set of time 1565697240000 ms 19/08/13 19:54:00 INFO JobScheduler: Finished job streaming job 1565697240000 ms.1 from job set of time 1565697240000 ms 19/08/13 19:54:00 INFO JobScheduler: Finished job streaming job 1565697240000 ms.0 from job set of time 1565697240000 ms 19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.2 from job set of time 1565697240000 ms 19/08/13 19:54:00 INFO SharedState: loading hive config file: file:/data01/hadoop/yarn/local/usercache/hdfs/filecache/85431/__spark_conf__.zip/__hadoop_conf__/hive-site.xml 19/08/13 19:54:00 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('hdfs://CID-042fb939-95b4-4b74-91b8-9f94b999bdf7/apps/hive/warehouse'). 19/08/13 19:54:00 INFO SharedState: Warehouse path is 'hdfs://CID-042fb939-95b4-4b74-91b8-9f94b999bdf7/apps/hive/warehouse'. 
19/08/13 19:54:00 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode02:31984 in memory (size: 1979.0 B, free: 366.3 MB)
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode02:28863 in memory (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode01:20487 in memory (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode03:3328 in memory (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:54:02 INFO CodeGenerator: Code generated in 175.416957 ms
19/08/13 19:54:02 INFO JobScheduler: Finished job streaming job 1565697240000 ms.2 from job set of time 1565697240000 ms
19/08/13 19:54:02 ERROR JobScheduler: Error running job streaming job 1565697240000 ms.2
org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/08/13 19:54:02 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/08/13 19:54:02 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
)
19/08/13 19:54:02 INFO StreamingContext: Invoking stop(stopGracefully=true) from shutdown hook
19/08/13 19:54:02 INFO ReceiverTracker: Sent stop signal to all 1 receivers
19/08/13 19:54:02 ERROR ReceiverTracker: Deregistered receiver for stream 0: Stopped by driver
19/08/13 19:54:02 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 70) in 35055 ms on datanode01 (executor 2) (1/1)
19/08/13 19:54:02 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
19/08/13 19:54:02 INFO DAGScheduler: ResultStage 2 (start at VoiceApplication2.java:128) finished in 35.086 s
19/08/13 19:54:02 INFO ReceiverTracker: Waiting for receiver job to terminate gracefully
19/08/13 19:54:02 INFO ReceiverTracker: Waited for receiver job to terminate gracefully
19/08/13 19:54:02 INFO ReceiverTracker: All of the receivers have deregistered successfully
19/08/13 19:54:02 INFO ReceiverTracker: ReceiverTracker stopped
19/08/13 19:54:02 INFO JobGenerator: Stopping JobGenerator gracefully
19/08/13 19:54:02 INFO JobGenerator: Waiting for all received blocks to be consumed for job generation
19/08/13 19:54:02 INFO JobGenerator: Waited for all received blocks to be consumed for job generation
19/08/13 19:54:12 WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
19/08/13 19:54:12 ERROR Utils: Uncaught exception in thread pool-1-thread-1
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.spark.streaming.util.RecurringTimer.stop(RecurringTimer.scala:86)
    at org.apache.spark.streaming.scheduler.JobGenerator.stop(JobGenerator.scala:137)
    at org.apache.spark.streaming.scheduler.JobScheduler.stop(JobScheduler.scala:123)
    at org.apache.spark.streaming.StreamingContext$$anonfun$stop$1.apply$mcV$sp(StreamingContext.scala:681)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357)
    at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:680)
    at org.apache.spark.streaming.StreamingContext.org$apache$spark$streaming$StreamingContext$$stopOnShutdown(StreamingContext.scala:714)
    at org.apache.spark.streaming.StreamingContext$$anonfun$start$1.apply$mcV$sp(StreamingContext.scala:599)
    at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1988)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
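Two details in the log point at the likely cause rather than at the metastore itself: the failing call is DataFrameWriter.saveAsTable at VoiceApplication2.java:122, and the database lookup goes through org.apache.spark.sql.catalyst.catalog.InMemoryCatalog. In Spark 2.3 that catalog is only used when the session was built without Hive support (spark.sql.catalogImplementation=in-memory), so a database that exists in the Hive metastore, like 'meta_voice', is invisible to it even though hive-site.xml was found and loaded (see the "loading hive config file" line above). A plausible fix, sketched in Java against the Spark 2.3 API (the class name is illustrative, not the original application):

```java
import org.apache.spark.sql.SparkSession;

// Hedged sketch: enable the Hive catalog before any SQL/table calls.
public class HiveCatalogCheck {
    public static void main(String[] args) {
        // enableHiveSupport() flips spark.sql.catalogImplementation from
        // "in-memory" to "hive", so saveAsTable() consults the Hive
        // metastore instead of the InMemoryCatalog seen in the stack trace.
        SparkSession spark = SparkSession.builder()
                .appName("voice_stream")
                .enableHiveSupport()
                .getOrCreate();

        // If the metastore connection works, 'meta_voice' should be listed.
        spark.sql("SHOW DATABASES").show(false);
    }
}
```

The same switch can be made at submit time with --conf spark.sql.catalogImplementation=hive; either way, hive-site.xml still has to be visible to the driver, which the log suggests it already is.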
作为程序员的我,大学四年一直自学,全靠这些实用工具和学习网站!
我本人因为高中沉迷于爱情,导致学业荒废,后来高考,毫无疑问进入了一所普普通通的大学,实在惭愧???? 我又是那么好强,现在学历不行,没办法改变的事情了,所以,进入大学开始,我就下定决心,一定要让自己掌握更多的技能,尤其选择了计算机这个行业,一定要多学习技术。 在进入大学学习不久后,我就认清了一个现实:我这个大学的整体教学质量和学习风气,真的一言难尽,懂的人自然知道怎么回事? 怎么办?我该如何更好的提升自...
新来个技术总监,禁止我们使用Lombok!
我有个学弟,在一家小型互联网公司做Java后端开发,最近他们公司新来了一个技术总监,这位技术总监对技术细节很看重,一来公司之后就推出了很多"政策",比如定义了很多开发规范、日志规范、甚至是要求大家统一使用某一款IDE。 但是这些都不是我这个学弟和我吐槽的点,他真正和我吐槽的是,他很不能理解,这位新来的技术总监竟然禁止公司内部所有开发使用Lombok。但是又没给出十分明确的,可以让人信服的理由。 于...
立即提问