Running Spark client-mode code on a YARN cluster

When I submit Spark's WordCount to the YARN cluster, it fails with the error below. Does anyone know how to fix this?

```
[hadoop00@hadoop02 ~]$ ./spark-submit-wordcount-yarn-client.sh
// Execution log:
19/07/31 17:12:36 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.2.102:4040
19/07/31 17:12:36 INFO spark.SparkContext: Added JAR file:/home/hadoop00/spark-core-1.0-SNAPSHOT-jar-with-dependencies.jar at spark://192.168.2.102:43723/jars/spark-core-1.0-SNAPSHOT-jar-with-dependencies.jar with timestamp 1564564356841
19/07/31 17:12:40 INFO yarn.Client: Requesting a new application from cluster with 0 NodeManagers
19/07/31 17:12:41 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
19/07/31 17:12:41 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/07/31 17:12:41 INFO yarn.Client: Setting up container launch context for our AM
19/07/31 17:12:41 INFO yarn.Client: Setting up the launch environment for our AM container
19/07/31 17:12:41 INFO yarn.Client: Preparing resources for our AM container
19/07/31 17:12:45 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/07/31 17:12:53 INFO yarn.Client: Uploading resource file:/tmp/spark-59635080-0711-4817-9e3b-b25f528cbbbe/__spark_libs__5797595590401639249.zip -> hdfs://myha01/user/hadoop00/.sparkStaging/application_1564523762236_0001/__spark_libs__5797595590401639249.zip
19/07/31 17:13:07 INFO yarn.Client: Uploading resource file:/tmp/spark-59635080-0711-4817-9e3b-b25f528cbbbe/__spark_conf__627970737981952935.zip -> hdfs://myha01/user/hadoop00/.sparkStaging/application_1564523762236_0001/__spark_conf__.zip
19/07/31 17:13:07 INFO spark.SecurityManager: Changing view acls to: hadoop00
19/07/31 17:13:07 INFO spark.SecurityManager: Changing modify acls to: hadoop00
19/07/31 17:13:07 INFO spark.SecurityManager: Changing view acls groups to: 
19/07/31 17:13:07 INFO spark.SecurityManager: Changing modify acls groups to: 
19/07/31 17:13:07 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop00); groups with view permissions: Set(); users  with modify permissions: Set(hadoop00); groups with modify permissions: Set()
19/07/31 17:13:07 INFO yarn.Client: Submitting application application_1564523762236_0001 to ResourceManager
19/07/31 17:13:08 INFO impl.YarnClientImpl: Submitted application application_1564523762236_0001
19/07/31 17:13:08 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1564523762236_0001 and attemptId None
19/07/31 17:13:09 INFO yarn.Client: Application report for application_1564523762236_0001 (state: ACCEPTED)
19/07/31 17:13:09 INFO yarn.Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1564523805324
         final status: UNDEFINED
         tracking URL: http://hadoop03:8088/proxy/application_1564523762236_0001/
         user: hadoop00
19/07/31 17:13:10 INFO yarn.Client: Application report for application_1564523762236_0001 (state: FAILED)
19/07/31 17:13:10 INFO yarn.Client: 
         client token: N/A
         diagnostics: Application application_1564523762236_0001 failed 2 times due to Error launching appattempt_1564523762236_0001_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. 
This token is expired. current time is 1564564389887 found 1564524406596
Note: System times on machines may be out of sync. Check system time and time zones.
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:123)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1564523805324
         final status: FAILED
         tracking URL: http://hadoop03:8088/cluster/app/application_1564523762236_0001
         user: hadoop00
19/07/31 17:13:10 INFO yarn.Client: Deleted staging directory hdfs://myha01/user/hadoop00/.sparkStaging/application_1564523762236_0001
19/07/31 17:13:10 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
        at p2._01ScalaWordCountRemoteOps$.main(_01ScalaWordCountRemoteOps.scala:21)
        at p2._01ScalaWordCountRemoteOps.main(_01ScalaWordCountRemoteOps.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/07/31 17:13:10 INFO server.AbstractConnector: Stopped Spark@6f2bafef{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/07/31 17:13:10 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.2.102:4040
19/07/31 17:13:10 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/07/31 17:13:10 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
19/07/31 17:13:10 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
19/07/31 17:13:10 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
19/07/31 17:13:10 INFO cluster.YarnClientSchedulerBackend: Stopped
19/07/31 17:13:10 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/07/31 17:13:10 INFO memory.MemoryStore: MemoryStore cleared
19/07/31 17:13:10 INFO storage.BlockManager: BlockManager stopped
19/07/31 17:13:10 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/07/31 17:13:10 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
19/07/31 17:13:10 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/07/31 17:13:10 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
        at p2._01ScalaWordCountRemoteOps$.main(_01ScalaWordCountRemoteOps.scala:21)
        at p2._01ScalaWordCountRemoteOps.main(_01ScalaWordCountRemoteOps.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/07/31 17:13:10 INFO util.ShutdownHookManager: Shutdown hook called
19/07/31 17:13:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-59635080-0711-4817-9e3b-b25f528cbbbe
```
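The diagnostics above already contain the likely cause: the container launch token was issued at epoch 1564524406596 but checked against a current time of 1564564389887, roughly 11 hours later, so it had long expired. The earlier line "Requesting a new application from cluster with 0 NodeManagers" points the same way; the NodeManagers had not registered with the ResourceManager. The log's own hint, "System times on machines may be out of sync. Check system time and time zones.", is the checklist to follow. Below is a minimal sketch of the usual remedy, assuming nodes named hadoop01/hadoop02/hadoop03 (only hadoop02 and hadoop03 actually appear in the log), passwordless SSH between them, and a reachable NTP server:

```bash
# Compare clocks across the cluster first; any node that is hours off is the culprit.
for host in hadoop01 hadoop02 hadoop03; do
  ssh "$host" 'echo "$(hostname): $(date "+%F %T %Z")"'
done

# Sync every node against the same NTP server (ntp.aliyun.com is only an example),
# persist the time to the hardware clock, then restart the YARN daemons so the
# NodeManagers re-register with the ResourceManager.
for host in hadoop01 hadoop02 hadoop03; do
  ssh "$host" 'sudo ntpdate -u ntp.aliyun.com && sudo hwclock --systohc'
done
```

Once the clocks (and time zones) agree across all machines, resubmitting the same spark-submit script should get past the token check.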

Other related questions
Spark on YARN: how to change the user shown in the YARN web UI
I am building a Windows-based server whose job is to submit Spark tasks to a YARN cluster, but as a result every submitted task shows up under that server's account name. How can I change this value dynamically per submission?
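Not an authoritative answer, but on clusters without Kerberos the user column in the RM UI is simply the identity the client submits as, so it can usually be overridden per submission. In the sketch below, etl_user, com.example.App, and app.jar are placeholders:

```bash
# Simple-auth (non-Kerberos) clusters honor HADOOP_USER_NAME as the submitting identity.
HADOOP_USER_NAME=etl_user spark-submit --master yarn --class com.example.App app.jar

# Alternatively, if the server's account is configured as a proxy user in
# core-site.xml (hadoop.proxyuser.<account>.*), submit on another user's behalf:
spark-submit --master yarn --proxy-user etl_user --class com.example.App app.jar
```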
Does running Spark 2.1.0 on YARN in client mode really require building Spark yourself?
As the title says: with the prebuilt package downloaded from the official site, running in client mode always fails with ERROR SparkContext: Error initializing SparkContext. org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master. Cluster mode, however, runs fine. I have read some discussion online claiming client mode needs a self-built Spark; is there anything to that? Environment: Java 1.8, Hadoop 2.7.3, Spark 2.1.0. Any guidance appreciated, thanks!
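No self-built Spark should be needed for the prebuilt Hadoop 2.7 package. One hedged guess for this exact symptom on Java 1.8 + Hadoop 2.7.x is the NodeManager killing the ApplicationMaster for exceeding virtual-memory limits; the NodeManager logs would confirm it ("running beyond virtual memory limits"), and only if they do would relaxing the check be appropriate:

```xml
<!-- Hedged sketch: add inside <configuration> of yarn-site.xml on every
     NodeManager, then restart YARN. Only appropriate if the NM logs actually
     show containers killed for exceeding virtual memory limits. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```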
Spark on YARN resource scheduling question
Why, when Spark runs on YARN, does resource usage look like the figure below, with one node barely used? ![screenshot](https://img-ask.csdn.net/upload/201912/17/1576545833_408284.png) My cluster has six machines: one runs the driver and five run executors, each with 8 GB of memory and 8 cores. Spark is started like this:
```
pyspark --master yarn --num-executors 4 --executor-memory 6g --executor-cores 6 --conf spark.default.parallelism=50 --deploy-mode client
```
Also, I set --num-executors to 4, so why are there 5 containers? Whatever --num-executors is set to, the container count is always that value plus one.
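For what it's worth, the extra container is expected: even with --deploy-mode client, YARN launches one ApplicationMaster container in addition to the executors, so --num-executors 4 yields 4 + 1 = 5 containers. The YARN CLI makes this visible (the IDs below are placeholders for your application's):

```bash
yarn applicationattempt -list application_1576545833_0001
yarn container -list appattempt_1576545833_0001_000001
```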
Submitting Spark to YARN from a web backend: seeking approaches
The requirement is for a web backend to invoke Spark, with Spark running on YARN (the web server and YARN live on different machines). The only approach I can think of is packaging the Spark job as a jar and having the backend execute a script/command. Is there any way that avoids packaging a jar? Any help appreciated.
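The application code itself still has to live in a jar somewhere, but the web backend does not have to shell out to spark-submit: Apache Livy, for example, exposes batch submission over REST, which also fits the constraint that the web server and YARN run on different machines. A sketch, where the Livy host, jar path, and class name are placeholders:

```bash
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"file": "hdfs:///jars/myApp.jar", "className": "com.xxx.MyClass"}' \
  http://livy-host:8998/batches
```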
Are there performance differences between the Spark on YARN and Spark on Mesos run modes?
Running WordCount with Spark on YARN
When running the wordcount program, it keeps printing the following and yarnAppState never becomes RUNNING:
```
14/05/13 15:05:25 INFO yarn.Client: Application report from ASM:
  application identifier: application_1399949387820_0008
  appId: 8
  clientToAMToken: null
  appDiagnostics:
  appMasterHost: N/A
  appQueue: default
  appMasterRpcPort: 0
  appStartTime: 1399964104011
  yarnAppState: ACCEPTED
  distributedFinalState: UNDEFINED
  appTrackingUrl: master:8088/proxy/application_1399949387820_0008/
  appUser: hadoop
```
I have 3 virtual machines, the master with 1 GB of memory and the slaves with 512 MB. My launch script is:
```
export YARN_CONF_DIR=/home/hadoop/hadoop-2.2.0/etc/hadoop
SPARK_JAR=/home/hadoop/spark-0.9.0-incubating-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar \
./spark-class org.apache.spark.deploy.yarn.Client \
  --jar spark-wordcount-in-scala.jar \
  --class WordCount \
  --args yarn-standalone \
  --args hdfs://master:6000/input \
  --args hdfs://master:6000/output \
  --num-workers 1 \
  --master-memory 512m \
  --worker-memory 512m \
  --worker-cores 1
```
Please help!
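An application that never leaves ACCEPTED usually means the scheduler cannot find room for the ApplicationMaster container, which is plausible here with 512 MB slaves. Two hedged first checks from the client machine:

```bash
# Registered NodeManagers and their available memory/vcores
yarn node -list -all

# Applications currently waiting for an AM container
yarn application -list -appStates ACCEPTED
```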
Spark on YARN: how can SchedulerDelay be optimized?
![screenshot](https://img-ask.csdn.net/upload/201508/18/1439894897_308498.jpg) I ran the SparkPi example in yarn-cluster mode on 3 nodes, with the partition-count argument also set to 3. SchedulerDelay is explained as the time to ship a task from the scheduler to an executor plus the time to ship the task's result back, and it can supposedly be shortened by reducing the size of the task and of the task result, which I don't really understand. Could someone explain how to reduce task size, and how task results affect the scheduler? Ideally with code and WebUI screenshots before and after optimization.
Spark on YARN: the 8088 page shows only one program RUNNING, all others stay ACCEPTED
My problem: only one AppId ever shows as RUNNING on the 8088 page; the rest stay ACCEPTED. I have tried modifying spark-env, yarn-site.xml, spark-defaults.conf, and capacity-scheduler.xml, none of which helped.
1.1 Edit yarn-site.xml and push it to the other nodes:
```
vim yarn-site.xml
scp -r /usr/local/hadoop-2.7.1/etc/hadoop/yarn-site.xml root@xiuba112:/usr/local/hadoop-2.7.1/etc/hadoop/
```
```
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
  <name>spark.shuffle.service.port</name>
  <value>7337</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```
1.2 Add the dependency jar: copy "${SPARK_HOME}/lib/spark-1.3.0-yarn-shuffle.jar" into "${HADOOP_HOME}/share/hadoop/yarn/lib/". (Note: newer releases have no lib directory, only jars; for example spark-2.0.2-yarn-shuffle.jar lives under ${SPARK_HOME}/yarn, so copy it from there into ${HADOOP_HOME}/share/hadoop/yarn/lib.)
```
cp /usr/local/spark-2.2.1-bin-hadoop2.7/yarn/spark-2.2.1-yarn-shuffle.jar /usr/local/hadoop-2.7.1/share/hadoop/yarn/lib/
```
1.3 Restart the NodeManager processes.
2. Push spark-defaults.conf to the nodes, where the following entries must be added:
```
scp -r /usr/local/spark-2.2.1-bin-hadoop2.7/conf/spark-defaults.conf root@xiuba112:/usr/local/spark-2.2.1-bin-hadoop2.7/conf/
```
```
spark.shuffle.service.enabled=true
spark.shuffle.service.port=7337
```
Steps 1 and 2 did not solve the problem.
3. In conf/spark-env.sh, and also in each node's /etc/profile, add the line:
```
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
```
Step 3 did not solve it either.
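The shuffle-service settings above govern shuffle data serving, not admission. The knob that usually decides how many applications may run concurrently is the share of cluster resources ApplicationMasters are allowed to occupy; its default of 0.1 often admits only one AM on a small cluster, leaving everything else in ACCEPTED. A hedged capacity-scheduler.xml sketch (apply with a ResourceManager restart or `yarn rmadmin -refreshQueues`):

```xml
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
```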
spark-submit runs an old version of the code after submission, not the latest
After submitting an application with spark-submit, the code that executes is an earlier version, not the latest. Submission goes through YARN:
```
spark-submit --master yarn --queue myQueue --class com.xxx.MyClass /myApp.jar
```
Previously, after modifying the code and rebuilding the jar, the submitted run used the freshly modified code; since today, however, every submission runs an old historical version, as if the code were cached. How do I clear it? I even tried submitting like this:
```
spark-submit --master yarn --queue myQueue --class com.xxx.MyClass /NoExist.jar
```
where NoExist.jar genuinely does not exist. The log says NoExist.jar does not exist, skipping, yet execution still enters MyClass and runs its logic, so MyClass persists somewhere. How can I clear it so that the latest code runs?
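Since the log explicitly says the named jar is missing and skipped, yet MyClass still executes, the old class is almost certainly being picked up from another jar already on the classpath rather than from any cache. A hedged way to hunt it down (paths are guesses):

```bash
# Search every jar Spark ships to YARN for the class in question.
for j in "$SPARK_HOME"/jars/*.jar; do
  unzip -l "$j" | grep -q 'com/xxx/MyClass.class' && echo "found in: $j"
done

# If spark.yarn.jars or spark.yarn.archive points at HDFS, inspect that location
# too, along with any leftover staging directories.
hdfs dfs -ls "/user/$USER/.sparkStaging" 2>/dev/null
```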
Spark cluster run error: exit code 15
This follows an example from the web, for my own learning and testing. I wrote a wordCount example in Scala; it runs in MyEclipse and produces results. After exporting it as a jar and running it on the Hadoop cluster, it fails with: Stack trace: ExitCodeException exitCode=15. This problem has troubled me for a week; I searched online for a long time without finding a solution. I don't have many points left... Thank you very much!
Environment: Hadoop 2.6.2, Spark 2.2, JDK 1.8, Scala 2.2. The Hadoop cluster itself should be fine; the browser can open the 50070 page.
Spark-on-YARN environment:
```
export JAVA_HOME=/usr/local/src/jdk1.8.0_144
export SPARK_MASTER_IP=node1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/usr/local/src/spark-2.2.0-bin-hadoop2.6
export SPARK_JAR=/usr/local/src/spark-2.2.0-bin-hadoop2.6/jars/*.jar
export PATH=$SPARK_HOME/bin:$PATH
```
The WordCount example (imports added for completeness):
```scala
import org.apache.spark.{SparkConf, SparkContext}

object wc {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("wc")
    val sc = new SparkContext(conf)
    val text = sc.textFile("test.txt")
    val words = text.flatMap(line => line.split(" "))
    val pairs = words.map(word => (word, 1))
    val results = pairs.reduceByKey(_ + _).map(tuple => (tuple._2, tuple._1))
    val sorted = results.sortByKey(false).map(tuple => (tuple._2, tuple._1))
    sorted.foreach(x => println(x))
    sc.stop()
  }
}
```
Error information:
```
Application Attempt State: FAILED
AM Container: container_1507729080248_0001_01_000001
Node: N/A
Tracking URL: History
Diagnostics Info: AM Container for appattempt_1507729080248_0001_000001 exited with exitCode: 15
For more detailed output, check application tracking page: http://node1:8088/proxy/application_1507729080248_0001/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1507729080248_0001_01_000001
Exit code: 15
Stack trace: ExitCodeException exitCode=15:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 15
Failing this attempt
```
![image](https://img-ask.csdn.net/upload/201710/12/1507764756_681945.jpg) ![image](https://img-ask.csdn.net/upload/201710/12/1507764760_746436.jpg) ![image](https://img-ask.csdn.net/upload/201710/12/1507764811_639764.jpg)
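Exit code 15 by itself only says the ApplicationMaster's JVM exited abnormally; the real exception is in the container logs. With log aggregation enabled they can be pulled from any node; otherwise look under the NodeManager's local userlogs directory on the node that ran the container:

```bash
yarn logs -applicationId application_1507729080248_0001
```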
spark-sql --master yarn-client fails to start, seeking advice
This is an HA cluster with Hive fully configured. ./bin/spark-sql, spark-sql --master local, and spark-sql --master spark://172.16.4.169:7077 all start spark-sql normally and can query tables. But with ./bin/spark-sql --master yarn-client it never logs in: no error is reported, it just hangs at the startup screen as shown. Where could the problem be? ![screenshot](https://img-ask.csdn.net/upload/201610/18/1476801025_876930.jpg)
Ansj + YARN: custom dictionary not being loaded
My recent requirement is to segment text with Ansj and then classify tokens by part of speech, where the POS tags come from a custom dictionary. After packaging the project, which tests fine locally, into a jar and submitting it to YARN, it seems that some workers load the dictionary and some don't, so only about half of the results come out correct. Stuck on this for days; please help.
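One plausible cause is that only some NodeManager hosts happen to have the dictionary file on local disk. Shipping it with the job guarantees every executor can localize its own copy; the dictionary filename and main class below are placeholders, since the actual Ansj dictionary path depends on your library.properties setup:

```bash
spark-submit --master yarn \
  --files /local/path/userLibrary.dic \
  --class com.xxx.Segmenter myApp.jar
```

Inside the job, load the dictionary via its bare filename (for example through SparkFiles.get) rather than an absolute local path that exists on only one machine.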
Hive INSERT fails when running on YARN but works after enabling local mode; the error is:
```
hive> insert into test values('B',2);
Query ID = root_20191114105642_8cc05952-0497-4eff-893e-af6de8f05c6e
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
19/11/14 10:56:43 INFO client.RMProxy: Connecting to ResourceManager at cloudera/37.64.0.71:8032
19/11/14 10:56:43 INFO client.RMProxy: Connecting to ResourceManager at cloudera/37.64.0.71:8032
java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:251)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:444)
        at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1328)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:836)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:772)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:699)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
        at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
        at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:284)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy43.submitApplication(Unknown Source)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:290)
        at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:297)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
        ... 35 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
        at org.apache.hadoop.ipc.Client.call(Client.java:1445)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy42.submitApplication(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:281)
        ... 48 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
```
# The maximum is only 6 GB, yet the job insists on requesting 15 GB; how should this be handled?
# Please help, everyone!
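The 15360 MB request comes from the MapReduce-side memory settings the job submits with, not from YARN itself, so capping those below the 6557 MB scheduler maximum should let the job through. A hedged session-level sketch (the same keys can also go into mapred-site.xml; 4096 is an arbitrary example value):

```sql
set yarn.app.mapreduce.am.resource.mb=4096;
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=4096;
insert into test values('B',2);
```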
Flink on YARN: task submission fails
When submitting tasks, I can only submit a limited number: ![screenshot](https://img-ask.csdn.net/upload/201909/24/1569320809_222117.png) ![screenshot](https://img-ask.csdn.net/upload/201909/24/1569320989_140951.png) The YARN cluster still has plenty of resources, so why does submitting even a trivial task produce the error above?
Flink on YARN task submission: file does not exist
Application application_1577461148853_0001 failed 1 times due to AM Container for appattempt_1577461148853_0001_000001 exited with exitCode: -1000
For more detailed output, check application tracking page: http://master:8088/cluster/app/application_1577461148853_0001 Then, click on links to logs of each attempt.
Diagnostics: File file:/home/chenjia/.flink/application_1577461148853_0001/application_1577461148853_0001-flink-conf.yaml1177367680042949062.tmp does not exist
java.io.FileNotFoundException: File file:/home/chenjia/.flink/application_1577461148853_0001/application_1577461148853_0001-flink-conf.yaml1177367680042949062.tmp does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
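The telling detail is the file:/ scheme: the client staged flink-conf under a local /home path, which NodeManagers on other hosts cannot read, and that usually means the client resolved the default filesystem to file:/// instead of HDFS. Two hedged checks before resubmitting:

```bash
hdfs getconf -confKey fs.defaultFS   # should print hdfs://..., not file:///
echo "$HADOOP_CONF_DIR"              # must point at the cluster's Hadoop conf dir
```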
The relationship between HBase and YARN resource management
What relationship does HBase have with YARN resource management, and how is it controlled? How does HBase use cluster resources?
Flink on YARN task submission: file does not exist, cause unknown
The program finished with the following exception:
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:420)
        at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:259)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1577490600420_0001 failed 1 times due to AM Container for appattempt_1577490600420_0001_000001 exited with exitCode: -1000
For more detailed output, check application tracking page: http://master:8088/cluster/app/application_1577490600420_0001 Then, click on links to logs of each attempt.
Diagnostics: File file:/home/chenjia/.flink/application_1577490600420_0001/application_1577490600420_0001-flink-conf.yaml1025360586633443671.tmp does not exist
java.io.FileNotFoundException: File file:/home/chenjia/.flink/application_1577490600420_0001/application_1577490600420_0001-flink-conf.yaml1025360586633443671.tmp does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
Question about YARN's CPU-related configuration parameters
In YARN, the CPU-related configuration parameters are as follows:
• yarn.nodemanager.resource.cpu-vcores: the number of virtual CPUs YARN may use on this node; default 8. It is currently recommended to set this equal to the number of physical CPU cores. If the node has fewer than 8 cores, the value must be lowered accordingly, because YARN does not detect the node's physical core count on its own.
• yarn.scheduler.minimum-allocation-vcores: the minimum number of virtual CPUs a single task may request; default 1. A request below this is rounded up to it.
• yarn.scheduler.maximum-allocation-vcores: the maximum number of virtual CPUs a single task may request; default 32.
For a cluster with many CPU cores the defaults above are clearly unsuitable. In my test cluster, 4 nodes with 32 CPU cores each, I configured:
```
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>32</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>128</value>
</property>
```
Is the statement above correct? For example, with 10 physical hosts of 30 cores each, should yarn.nodemanager.resource.cpu-vcores be set to 29, yarn.scheduler.minimum-allocation-vcores stay at the default 1, and yarn.scheduler.maximum-allocation-vcores be set to 300? How should these be set?
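A hedged reading of the parameters quoted above: yarn.nodemanager.resource.cpu-vcores is a per-node figure, while yarn.scheduler.maximum-allocation-vcores caps a single container request; since one container runs on one node, values above the per-node vcores (128, or 300 for 30-core hosts) buy nothing. For 10 hosts with 30 cores each, something like the following per node seems more consistent; 28 is a judgment call that leaves headroom for the OS and Hadoop daemons:

```xml
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>28</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>28</value>
</property>
```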
Submitting a Spark task from Java fails with ExitCodeException exitCode=10
As the title says: submitting a Spark task to YARN from local Java debugging fails with ExitCodeException exitCode=10. Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201705/08/1494209240_94499.png) Code screenshots: ![screenshot](https://img-ask.csdn.net/upload/201705/08/1494209424_371796.png) ![screenshot](https://img-ask.csdn.net/upload/201705/08/1494209433_890826.png)