Hadoop 3.0: running yarn jar 3.0.0-alpha2.jar pi 10 100 fails

2017-05-17 19:07:12,789 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-brody/nm-local-dir/nmPrivate/container_1495017112106_0012_01_000001.tokens
2017-05-17 19:07:12,790 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user brody
2017-05-17 19:07:12,793 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-brody/nm-local-dir/nmPrivate/container_1495017112106_0012_01_000001.tokens to /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012/container_1495017112106_0012_01_000001.tokens
2017-05-17 19:07:12,794 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012 = file:/tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012
2017-05-17 19:07:12,843 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2017-05-17 19:07:13,178 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1495017112106_0012_01_000001 transitioned from LOCALIZING to SCHEDULED
2017-05-17 19:07:13,178 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler: Starting container [container_1495017112106_0012_01_000001]
2017-05-17 19:07:13,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1495017112106_0012_01_000001 transitioned from SCHEDULED to RUNNING
2017-05-17 19:07:13,352 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1495017112106_0012_01_000001
2017-05-17 19:07:13,359 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012/container_1495017112106_0012_01_000001/default_container_executor.sh]
2017-05-17 19:07:13,686 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1495017112106_0012_01_000001 is : 1
2017-05-17 19:07:13,686 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1495017112106_0012_01_000001 and exit code: 1
ExitCodeException exitCode=1:

2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_1495017112106_0012_02_000001 from application application_1495017112106_0012
2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1495017112106_0012_02_000001
2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1495017112106_0012
2017-05-17 19:07:16,405 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1495017112106_0012_02_000001]
2017-05-17 19:07:16,408 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1495017112106_0012 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1495017112106_0012
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1495017112106_0012 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1495017112106_0012, with delay of 10800 seconds
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012
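The NodeManager log above shows the AM container exiting with code 1 but not why; the actual failure reason lives in the container's own stdout/stderr. A first diagnostic step, as a sketch (paths below are the YARN defaults and may differ on your cluster):

```
# With log aggregation enabled, pull all container logs for the application:
yarn logs -applicationId application_1495017112106_0012

# Without aggregation, the failed container's stderr sits in the NodeManager's
# local log directory (default ${yarn.log.dir}/userlogs):
cat $HADOOP_HOME/logs/userlogs/application_1495017112106_0012/container_1495017112106_0012_01_000001/stderr
```

On 3.0.0-alpha2, exit code 1 from the pi example is often a classpath or environment problem inside the container, and that stderr is what confirms it.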

Related questions
Hadoop 2.2: error running wordcount
Hadoop 2.2 + JDK 1.7, running the wordcount example with hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /word /ws fails with: org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1449733659077_0001_m_000000_0: Error: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.InputSplit. Any pointers would be appreciated.
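The cast failure comes from mixing Hadoop's two MapReduce APIs: org.apache.hadoop.mapreduce.lib.input.FileSplit belongs to the new API, while org.apache.hadoop.mapred.InputSplit belongs to the old one. This typically happens when a mapper, reducer, or input format from one package tree is wired into a driver from the other. Below is a minimal self-contained sketch (not the asker's code) that stays entirely on the new org.apache.hadoop.mapreduce API:

```
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;   // new API, not org.apache.hadoop.mapred.*
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; // new API

public class WordCount {

    // Mapper extends org.apache.hadoop.mapreduce.Mapper, never the old mapred interface.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // The input/output formats must also come from org.apache.hadoop.mapreduce.lib.*:
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```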
Version conflict between Hadoop 2.4.0 and hbase-0.96.2-hadoop2
My Hadoop environment is Hadoop 2.4.0 and HBase is hbase-0.96.2-hadoop2. Today I wrote a program against the HBase API, and running it threw the error below: 2014-09-01 18:16:00,247 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2014-09-01 18:16:00,283 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries. at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278) at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300) at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293) at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76) at org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1514) at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:113) at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:265) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:134) at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1710) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:806) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247) at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:183) at cn.haha.HBase.HBaseApp1.main(HBaseApp1.java:26) 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=Admin-PC 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.7.0_65 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Oracle Corporation 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=C:\workDir\jdk7u65\jre 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client
environment:java.class.path=E:\workDir\workspace_eclipse\HBase-0.96\bin;E:\workDir\workspace_eclipse\HBase-0.96\lib\activation-1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\aopalliance-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\asm-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\avro-1.7.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-1.7.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-core-1.8.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-cli-1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-codec-1.7.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-collections-3.2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-compress-1.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-configuration-1.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-daemon-1.0.13.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-digester-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-el-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-httpclient-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-io-2.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-lang-2.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-logging-1.1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-math-2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-net-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\findbugs-annotations-1.3.9-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\gmbal-api-only-3.0.0-b023.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-framework-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-server-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-servlet-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-rcm-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guava-12.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-servlet-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-annotations-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-auth-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-app-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-core-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-shuffle-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-api-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-nodemanager-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hamcrest-core-1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-client-0.96.2-hadoop2.jar
;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-examples-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop2-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-prefix-tree-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-protocol-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-shell-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-testing-util-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-thrift-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\htrace-core-2.04.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpclient-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpcore-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-core-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-jaxrs-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-mapper-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-xc-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jamon-runtime-2.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-compiler-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-runtime-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.inject-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-api-3.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-api-2.2.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-impl-2.2.3-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-client-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-core-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-guice-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-json-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-server-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-core-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jets3t-0.6.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jettison-1.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-sslengine-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-util-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jruby-complete-1.6.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsch-0.1.42.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-api-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsr305-1.3.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\junit-4.11.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\libthrift-0.9.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\log4j-1.2.17.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\management-api-3.
0.0-b012.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\metrics-core-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\netty-3.6.6.Final.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\paranamer-2.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\protobuf-java-2.5.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\servlet-api-2.5-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-api-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-log4j12-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\snappy-java-1.0.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xmlenc-0.52.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xz-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\zookeeper-3.4.5.jar 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.library.path=C:\workDir\jdk7u65\bin;C:\windows\Sun\Java\bin;C:\windows\system32;C:\windows;C:/workDir/jdk7u65/bin/../jre/bin/client;C:/workDir/jdk7u65/bin/../jre/bin;C:/workDir/jdk7u65/bin/../jre/lib/i386;C:\Program Files (x86)\Common Files\NetSarang;C:\workDir\jdk7u65\bin;E:\workDir\apache-tomcat-7.0.55;E:\workDir\apache-tomcat-7.0.55;%CATALINA_HOME%\common\lib\common\lib\bin;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\Lenovo\Fingerprint Manager Pro\;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x64;C:\Program Files (x86)\IDM Computer Solutions\UltraEdit\;E:\workDir\eclipse-indigo-3.7.2;;. 
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\ 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA> 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows 7 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=x86 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=6.1 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Users\Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=E:\workDir\workspace_eclipse\HBase-0.96 2014-09-01 18:16:00,299 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,326 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,328 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,329 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,335 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001c, negotiated timeout = 40000 2014-09-01 18:16:00,472 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,473 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. 
Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,478 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001d, negotiated timeout = 40000 2014-09-01 18:16:00,499 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available 2014-09-01 18:16:00,817 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001d closed 2014-09-01 18:16:00,817 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down 2014-09-01 18:16:01,288 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:01,290 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:01,290 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:01,291 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:01,294 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001e, negotiated timeout = 40000 2014-09-01 18:16:01,304 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001e closed 2014-09-01 18:16:01,304 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
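Separately from any HBase version mismatch, the `Could not locate executable null\bin\winutils.exe` error means HADOOP_HOME is unset on the Windows client: Hadoop's Shell class looks for winutils.exe under %HADOOP_HOME%\bin, and `null` is the missing home directory. A common workaround, as a sketch, assuming winutils.exe has been placed under C:\hadoop\bin (the path is illustrative):

```
// Must run before the first Hadoop/HBase class is loaded, e.g. at the top of main().
// C:\hadoop is an assumed directory containing bin\winutils.exe.
System.setProperty("hadoop.home.dir", "C:\\hadoop");
```

Setting the HADOOP_HOME environment variable to the same directory achieves the same effect.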
Does hadoop3.0.0 no longer generate the _SUCCESS file?
As the title says: does hadoop3.0.0 not generate a _SUCCESS file?
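For context: the _SUCCESS marker is written by FileOutputCommitter and is still produced by default in Hadoop 3.x; whether it appears is governed by a job property, so it may simply have been switched off. The relevant setting, shown with its default value:

```
<!-- mapred-site.xml or per-job configuration: when true (the default),
     FileOutputCommitter drops an empty _SUCCESS file into the output
     directory of every successfully completed job. -->
<property>
  <name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name>
  <value>true</value>
</property>
```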
Hadoop 2.2.0 cluster: RM configured for HA, but the NodeManager cannot communicate with the ResourceManager
yarn-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>11.24.88.242</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>11.24.88.244</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>11.20.26.6:2181,11.20.26.2:2181,11.20.26.3:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

RM HA is configured in yarn-site.xml, but it keeps failing: the NodeManager keeps trying to talk to 0.0.0.0:8031 and never reaches the ResourceManager: 2019-08-13 13:33:26,799 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From hadoop7/11.20.200.197 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:181) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:199) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:339) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:386) Caused by: java.net.ConnectException: Call From hadoop7/11.20.200.197 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) at org.apache.hadoop.ipc.Client.call(Client.java:1351) at org.apache.hadoop.ipc.Client.call(Client.java:1300) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy23.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at $Proxy24.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:238) at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:175) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642) at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399) at org.apache.hadoop.ipc.Client.call(Client.java:1318)
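A NodeManager registering against 0.0.0.0:8031 is falling back to the default yarn.resourcemanager.resource-tracker.address, which usually means the NM on hadoop7 is not picking up the HA properties at all, for example because that host has a stale or truncated yarn-site.xml (the paste above also ends without a closing </configuration>). One way to rule out address derivation, as a diagnostic sketch (8031 is the YARN default port), is to spell the tracker addresses out per RM id:

```
<!-- Normally derived from yarn.resourcemanager.hostname.rm1/rm2; explicit
     values here are a diagnostic aid, not a requirement. -->
<property>
  <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
  <value>11.24.88.242:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
  <value>11.24.88.244:8031</value>
</property>
```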
[Hadoop 3.0.0] Could not find YarnChild
Could not find or load main class org.apache.hadoop.mapred.YarnChild. With hadoop3.0.0, everything is configured and the cluster starts up fine, but running hadoop-mapreduce-examples-3.0.0.jar fails with this error (running my own program fails the same way). ![error screenshot](https://img-ask.csdn.net/upload/201802/06/1517908041_839845.png) My configuration is below:

<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoopdata</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
</property>

export HADOOP_HOME=/data/hadoop-3.0.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

HADOOP_CLASSPATH is not configured. I have searched Baidu, Google, and Stack Overflow for quite a while without finding an answer; hoping someone can help.
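On Hadoop 3.x, this error usually means the MapReduce jars are missing from the task containers' classpath. In the configuration above, yarn.app.mapreduce.am.env points HADOOP_MAPRED_HOME at $HADOOP_COMMON_HOME, which may well be unset in the container environment, and the map/reduce task environments are not set at all. A sketch for mapred-site.xml, using the concrete install path from the post:

```
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
```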
hadoop-2.6.0 test cases fail when building on Ubuntu
Results : Failed tests: TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but was:</[default-rack]> TestTableMapping.testTableCaching:79 expected:</[rack1]> but was:</[default-rack]> TestTableMapping.testResolve:56 expected:</[rack1]> but was:</[default-rack]> TestDecayRpcScheduler.testAccumulate:136 expected:<3> but was:<2> TestDecayRpcScheduler.testPriority:203 expected:<2> but was:<1> Tests run: 2723, Failures: 5, Errors: 0, Skipped: 91 [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main ................................ SUCCESS [4.300s] [INFO] Apache Hadoop Project POM ......................... SUCCESS [2.250s] [INFO] Apache Hadoop Annotations ......................... SUCCESS [7.805s] [INFO] Apache Hadoop Assemblies .......................... SUCCESS [1.006s] [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [8.227s] [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [9.390s] [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [22.836s] [INFO] Apache Hadoop Auth ................................ SUCCESS [40.704s] [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [4.181s] [INFO] Apache Hadoop Common .............................. FAILURE [27:26.889s] [INFO] Apache Hadoop NFS ................................. SKIPPED [INFO] Apache Hadoop KMS ................................. SKIPPED [INFO] Apache Hadoop Common Project ...................... SKIPPED [INFO] Apache Hadoop HDFS ................................ SKIPPED [INFO] Apache Hadoop HttpFS .............................. SKIPPED [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED [INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED [INFO] Apache Hadoop HDFS Project ........................ SKIPPED [INFO] hadoop-yarn ....................................... SKIPPED [INFO] hadoop-yarn-api ................................... SKIPPED [INFO] hadoop-yarn-common ................................ SKIPPED [INFO] hadoop-yarn-server ................................ SKIPPED [INFO] hadoop-yarn-server-common ......................... SKIPPED [INFO] hadoop-yarn-server-nodemanager .................... SKIPPED [INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED [INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED [INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED [INFO] hadoop-yarn-server-tests .......................... SKIPPED [INFO] hadoop-yarn-client ................................ SKIPPED [INFO] hadoop-yarn-applications .......................... SKIPPED [INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED [INFO] hadoop-yarn-site .................................. SKIPPED [INFO] hadoop-yarn-registry .............................. SKIPPED [INFO] hadoop-yarn-project ............................... SKIPPED [INFO] hadoop-mapreduce-client ........................... SKIPPED [INFO] hadoop-mapreduce-client-core ...................... SKIPPED [INFO] hadoop-mapreduce-client-common .................... SKIPPED [INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED [INFO] hadoop-mapreduce-client-app ....................... SKIPPED [INFO] hadoop-mapreduce-client-hs ........................ SKIPPED [INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED [INFO] hadoop-mapreduce-client-hs-plugins ................ 
SKIPPED [INFO] Apache Hadoop MapReduce Examples .................. SKIPPED [INFO] hadoop-mapreduce .................................. SKIPPED [INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED [INFO] Apache Hadoop Distributed Copy .................... SKIPPED [INFO] Apache Hadoop Archives ............................ SKIPPED [INFO] Apache Hadoop Rumen ............................... SKIPPED [INFO] Apache Hadoop Gridmix ............................. SKIPPED [INFO] Apache Hadoop Data Join ........................... SKIPPED [INFO] Apache Hadoop Ant Tasks ........................... SKIPPED [INFO] Apache Hadoop Extras .............................. SKIPPED [INFO] Apache Hadoop Pipes ............................... SKIPPED [INFO] Apache Hadoop OpenStack support ................... SKIPPED [INFO] Apache Hadoop Amazon Web Services support ......... SKIPPED [INFO] Apache Hadoop Client .............................. SKIPPED [INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED [INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED [INFO] Apache Hadoop Tools Dist .......................... SKIPPED [INFO] Apache Hadoop Tools ............................... SKIPPED [INFO] Apache Hadoop Distribution ........................ SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 29:15.906s [INFO] Finished at: Tue Jun 09 01:10:59 CST 2015 [INFO] Final Memory: 65M/202M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-common: There are test failures. [ERROR] [ERROR] Please refer to /home/cj/workspace/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/target/surefire-reports for the individual test results. [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn <goals> -rf :hadoop-common
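Only five tests fail and the rest of hadoop-common passes; TestTableMapping and TestDecayRpcScheduler failures like these are often environment-dependent (host name resolution, timing) rather than build problems. One workaround, as a sketch, is to build without executing tests and investigate the failures separately via the surefire reports the log points to:

```
# Build without running tests (they are still compiled):
mvn clean package -DskipTests
# Or resume from the failed module as the log suggests ('package' is an example goal):
mvn package -rf :hadoop-common -DskipTests
```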
Maven 3.3.9 fails to build hadoop2.6.5; please help resolve this
[INFO] Building Apache Hadoop Main 2.6.5 [INFO] ------------------------------------------------------------------------ Downloading: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Could not validate integrity of download from http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml: Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml Downloaded: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml (99 KB at 11.8 KB/sec) [WARNING] The metadata /root/.m2/repository/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata-ibiblio.org.xml is invalid: end tag name </body> must match start tag name <hr> from line 888 (position: START_TAG seen ... 08-Nov-2014 19:04 207\r\n</pre><hr></body>... @888:18) [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main ................................. FAILURE [ 8.416 s] [INFO] Apache Hadoop Build Tools .......................... SKIPPED [INFO] Apache Hadoop Project POM .......................... SKIPPED [INFO] Apache Hadoop Annotations .......................... SKIPPED [INFO] Apache Hadoop Assemblies ........................... SKIPPED [INFO] Apache Hadoop Project Dist POM ..................... SKIPPED [INFO] Apache Hadoop Maven Plugins ........................ SKIPPED [INFO] Apache Hadoop MiniKDC .............................. SKIPPED [INFO] Apache Hadoop Auth ................................. SKIPPED [INFO] Apache Hadoop Auth Examples ........................ SKIPPED [INFO] Apache Hadoop Common ............................... SKIPPED [INFO] Apache Hadoop NFS .................................. SKIPPED [INFO] Apache Hadoop KMS .................................. SKIPPED [INFO] Apache Hadoop Common Project ....................... SKIPPED [INFO] Apache Hadoop HDFS ................................. SKIPPED [INFO] Apache Hadoop HttpFS ............................... SKIPPED [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED [INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED [INFO] Apache Hadoop HDFS Project ......................... SKIPPED [INFO] hadoop-yarn ........................................ SKIPPED [INFO] hadoop-yarn-api .................................... SKIPPED [INFO] hadoop-yarn-common ................................. SKIPPED [INFO] hadoop-yarn-server ................................. SKIPPED [INFO] hadoop-yarn-server-common .......................... SKIPPED [INFO] hadoop-yarn-server-nodemanager ..................... SKIPPED [INFO] hadoop-yarn-server-web-proxy ....................... SKIPPED [INFO] hadoop-yarn-server-applicationhistoryservice ....... SKIPPED [INFO] hadoop-yarn-server-resourcemanager ................. SKIPPED [INFO] hadoop-yarn-server-tests ........................... SKIPPED [INFO] hadoop-yarn-client ................................. 
SKIPPED [INFO] hadoop-yarn-applications ........................... SKIPPED [INFO] hadoop-yarn-applications-distributedshell .......... SKIPPED [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SKIPPED [INFO] hadoop-yarn-site ................................... SKIPPED [INFO] hadoop-yarn-registry ............................... SKIPPED [INFO] hadoop-yarn-project ................................ SKIPPED [INFO] hadoop-mapreduce-client ............................ SKIPPED [INFO] hadoop-mapreduce-client-core ....................... SKIPPED [INFO] hadoop-mapreduce-client-common ..................... SKIPPED [INFO] hadoop-mapreduce-client-shuffle .................... SKIPPED [INFO] hadoop-mapreduce-client-app ........................ SKIPPED [INFO] hadoop-mapreduce-client-hs ......................... SKIPPED [INFO] hadoop-mapreduce-client-jobclient .................. SKIPPED [INFO] hadoop-mapreduce-client-hs-plugins ................. SKIPPED [INFO] Apache Hadoop MapReduce Examples ................... SKIPPED [INFO] hadoop-mapreduce ................................... SKIPPED [INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED [INFO] Apache Hadoop Distributed Copy ..................... SKIPPED [INFO] Apache Hadoop Archives ............................. SKIPPED [INFO] Apache Hadoop Rumen ................................ SKIPPED [INFO] Apache Hadoop Gridmix .............................. SKIPPED [INFO] Apache Hadoop Data Join ............................ SKIPPED [INFO] Apache Hadoop Ant Tasks ............................ SKIPPED [INFO] Apache Hadoop Extras ............................... SKIPPED [INFO] Apache Hadoop Pipes ................................ SKIPPED [INFO] Apache Hadoop OpenStack support .................... SKIPPED [INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED [INFO] Apache Hadoop Client ............................... SKIPPED [INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED [INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED [INFO] Apache Hadoop Tools Dist ........................... SKIPPED [INFO] Apache Hadoop Tools ................................ SKIPPED [INFO] Apache Hadoop Distribution ......................... SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 06:03 min [INFO] Finished at: 2018-06-23T11:25:17+08:00 [INFO] Final Memory: 27M/69M [INFO] ------------------------------------------------------------------------ [ERROR] Error resolving version for plugin 'org.apache.maven.plugins:maven-javadoc-plugin' from the repositories [local (/root/.m2/repository), ibiblio.org (http://mirrors.ibiblio.org/pub/mirrors/maven2)]: Plugin not found in any plugin repository -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginVersionResolutionException You have new mail in /var/spool/mail/root
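The repeated "Checksum validation failed, expected <html>" warnings show that the ibiblio mirror is answering with an HTML error page instead of Maven metadata, which is why the maven-javadoc-plugin version cannot be resolved. A sketch of a fix is to route Maven at a healthy repository in ~/.m2/settings.xml; the mirror id and name below are illustrative:

```
<!-- ~/.m2/settings.xml: send all repository traffic to Maven Central
     instead of the broken ibiblio mirror. -->
<settings>
  <mirrors>
    <mirror>
      <id>central-mirror</id>
      <name>Maven Central</name>
      <url>https://repo.maven.apache.org/maven2</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```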
Help needed: upgrading hadoop-2.2.0 to hadoop-2.6.0
I need to upgrade Hadoop from hadoop-2.2.0 to hadoop-2.6.0. Following http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade, the very first step, ./bin/hdfs dfsadmin -rollingUpgrade prepare, fails with: PREPARE rolling upgrade ... rollingUpgrade: Unknown method rollingUpgrade called on org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
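"Unknown method rollingUpgrade" is the running 2.2.0 NameNode rejecting the RPC: rolling upgrade was only added to ClientProtocol in Hadoop 2.4.0 (HDFS-5535), so a 2.2.0 cluster cannot be rolling-upgraded directly. A sketch of the classic upgrade-with-downtime path instead (back up the NameNode metadata first):

```
# 1. Stop the 2.2.0 cluster and switch HADOOP_HOME/PATH to the 2.6.0 binaries.
# 2. Start HDFS with -upgrade so the NameNode converts its on-disk metadata:
$HADOOP_HOME/sbin/start-dfs.sh -upgrade
# 3. After the upgraded cluster has run healthily for a while, make it permanent:
$HADOOP_HOME/bin/hdfs dfsadmin -finalizeUpgrade
```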
hadoop2.5.0-cdh5.3.1: how do I pick a matching Spark?
I installed hadoop2.5.0-cdh5.3.1 + spark1.2.1-bin-hadoop2.4.tgz and ran into many problems. Is this a version incompatibility? Please help! Also, how do I compile the jar? I cannot find a build or sbt path.
hadoop2.7.1: wordcount fails with error 1639
The full log is below; please take a look, thanks. Application application_1450887330517_0001 failed 2 times due to AM Container for appattempt_1450887330517_0001_000002 exited with exitCode: 1639 For more detailed output, check application tracking page: http://Luke-PC:8088/cluster/app/application_1450887330517_0001 Then, click on links to logs of each attempt. Diagnostics: Exception from container-launch. Container id: container_1450887330517_0001_02_000001 Exit code: 1639 Exception message: Incorrect command line arguments. TaskExit: error (1639): Invalid command line argument. Consult the Windows Installer SDK for detailed command line help. Stack trace: ExitCodeException exitCode=1639: Incorrect command line arguments. TaskExit: error (1639): Invalid command line argument. Consult the Windows Installer SDK for detailed command line help. at org.apache.hadoop.util.Shell.runCommand(Shell.java:545) at org.apache.hadoop.util.Shell.run(Shell.java:456) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Shell output: Usage: task create [TASKNAME] [COMMAND_LINE] | task createAsUser [TASKNAME] [USERNAME] [PIDFILE] [COMMAND_LINE] | task isAlive [TASKNAME] | task kill [TASKNAME] task processList [TASKNAME] Creates a new task jobobject with taskname Creates a new task jobobject with taskname as the user provided Checks if task jobobject is alive Kills task jobobject Prints to stdout a list of processes in the task along with their resource usage. One process per line and comma separated info per process ProcessId,VirtualMemoryCommitted(bytes), WorkingSetSize(bytes),CpuTime(Millisec,Kernel+User) Container exited with a non-zero exit code 1639 Failing this attempt. Failing the application.
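Exit code 1639 is the Windows "invalid command line argument" error bubbling up from winutils.exe, and the usage text under "Shell output" is winutils rejecting the arguments the 2.7.1 NodeManager passed to "task create". One commonly reported cause is a winutils.exe/hadoop.dll built for a different Hadoop version sitting in %HADOOP_HOME%\bin or on the PATH. A manual check against the usage printed above, as a sketch (task name and command are illustrative):

```
:: Run winutils by hand with the syntax it claims to support; if this succeeds
:: while the NodeManager's invocation fails, the binary likely does not match
:: 2.7.1 and should be replaced with winutils.exe/hadoop.dll built for 2.7.1.
%HADOOP_HOME%\bin\winutils.exe task create testTask "cmd /c echo ok"
```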
Building hadoop3.0.2 fails at enforce-banned-dependencies
Building hadoop3.0.2, the module Apache Hadoop Client Packaging Invariants for Test fails. Could someone help me figure out where it went wrong and how to fix it?
```
[INFO] Apache Hadoop Client Test Minicluster .............. SUCCESS [02:00 min]
[INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [ 0.792 s]
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project 3.0.2 .......... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 34:30 min
[INFO] Finished at: 2018-03-18T06:30:58+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) on project hadoop-client-check-test-invariants: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) on project hadoop-client-check-test-invariants: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed.
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:497)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed.
    at org.apache.maven.plugins.enforcer.EnforceMojo.execute (EnforceMojo.java:243)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:208)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:497)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
```
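The enforce-banned-dependencies execution of maven-enforcer-plugin fails when a banned artifact leaks into the shaded hadoop-client-check-test-invariants module; the offending dependency is printed just above the reactor summary in the full log, so look there first. To complete the build while investigating, the enforcer can be skipped; this is a workaround, not a fix:

```
# Skip all maven-enforcer-plugin executions for this build (other flags are examples):
mvn clean package -DskipTests -Denforcer.skip=true
```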
Hadoop 2.2.0 setup: NameNode initialization fails
Formatting the HDFS NameNode throws an error; please help! FATAL namenode.NameNode: Exception in namenode join java.lang.ClassCastException: com.sun.org.apache.xerces.internal.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2111) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2001) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1918) at org.apache.hadoop.conf.Configuration.get(Configuration.java:721) at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:740) at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:965) at org.apache.hadoop.security.Groups.<init>(Groups.java:62) at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214) at org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275) at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:269) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 15/04/13 04:15:01 INFO util.ExitUtil: Exiting with status 1 15/04/13 04:15:02 INFO namenode.NameNode: SHUTDOWN_MSG:
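This ClassCastException is thrown while Configuration.loadResource parses one of the *-site.xml files: the loader expected a plain text node but found a nested XML element, so it almost always points at a malformed configuration file rather than at Hadoop itself. An illustration of the kind of mistake that triggers it (property names and values are examples):

```
<!-- Broken: a child element inside <value>, where the loader expects text only. -->
<property>
  <name>dfs.replication</name>
  <value><final>true</final></value>
</property>

<!-- Fixed: <final> is a sibling of <value>, and <value> holds plain text. -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <final>true</final>
</property>
```

Re-checking core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml for stray or mis-nested tags is the first thing to try.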
A Hive INSERT statement fails when running on YARN but works once local mode is enabled; the error is as follows:
```
hive> insert into test values('B',2);
Query ID = root_20191114105642_8cc05952-0497-4eff-893e-af6de8f05c6e
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
19/11/14 10:56:43 INFO client.RMProxy: Connecting to ResourceManager at cloudera/37.64.0.71:8032
19/11/14 10:56:43 INFO client.RMProxy: Connecting to ResourceManager at cloudera/37.64.0.71:8032
java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:251)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:444)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1328)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:836)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:772)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:699)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. (same message and server-side frames as above, then:)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
    at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
    at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:284)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy43.submitApplication(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:290)
    at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:297)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
    ... 35 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. (same message and server-side frames as above, then:)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
    at org.apache.hadoop.ipc.Client.call(Client.java:1445)
    at org.apache.hadoop.ipc.Client.call(Client.java:1355)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy42.submitApplication(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:281)
    ... 48 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: ...)' (same message and server-side stack trace as above)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! (same message and server-side stack trace printed once more)
```

The cluster's maximum container allocation is only about 6 GB, yet the job insists on requesting 15 GB. How should this be handled? Any help would be much appreciated!
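A common way out of this error is to stop the job from asking for more memory than the scheduler's maximum allowed allocation (6557 MB here). The property names below are standard Hadoop/MapReduce settings; the 4096 MB figures are illustrative assumptions, not measured requirements — anything at or below the cluster maximum should pass validation. A minimal sketch, run inside the Hive session:

```
-- Cap the MapReduce container sizes below the 6557 MB scheduler maximum
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=4096;
set yarn.app.mapreduce.am.resource.mb=4096;
-- JVM heap must stay below the container size (roughly 80% is customary)
set mapreduce.map.java.opts=-Xmx3276m;
set mapreduce.reduce.java.opts=-Xmx3276m;
```

Alternatively, if the nodes really do have 15 GB to spare, raise `yarn.nodemanager.resource.memory-mb` (and `yarn.scheduler.maximum-allocation-mb`) in yarn-site.xml and restart the NodeManagers — the error message itself notes that the effective maximum is derived from the registered NodeManagers' resources.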
A problem integrating Azkaban with Hadoop 2.5.1
```
Using Hadoop from /usr/local/hadoop-suite/hadoop
Using Hive from /usr/local/hadoop-suite/hive
bin/.. /usr/local/jdk/lib/tools.jar:/usr/local/jdk/lib/dt.jar:bin/../lib/azkaban-common-2.6.4.jar:bin/../lib/azkaban-webserver-2.6.4.jar:bin/../lib/commons-codec-1.9.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-configuration-1.8.jar:bin/../lib/commons-dbcp-1.4.jar:bin/../lib/commons-dbutils-1.5.jar:bin/../lib/commons-email-1.2.jar:bin/../lib/commons-fileupload-1.2.1.jar:bin/../lib/commons-io-2.4.jar:bin/../lib/commons-jexl-2.1.1.jar:bin/../lib/commons-lang-2.6.jar:bin/../lib/commons-logging-1.1.1.jar:bin/../lib/commons-pool-1.6.jar:bin/../lib/data-1.15.7.jar:bin/../lib/gradle-plugins-1.15.7.jar:bin/../lib/guava-13.0.1.jar:bin/../lib/h2-1.3.170.jar:bin/../lib/httpclient-4.2.1.jar:bin/../lib/httpcore-4.2.1.jar:bin/../lib/jackson-core-2.3.2.jar:bin/../lib/jackson-core-asl-1.9.5.jar:bin/../lib/jackson-mapper-asl-1.9.5.jar:bin/../lib/jetty-6.1.26.jar:bin/../lib/jetty-util-6.1.26.jar:bin/../lib/joda-time-2.0.jar:bin/../lib/jopt-simple-4.3.jar:bin/../lib/li-jersey-uri-1.15.7.jar:bin/../lib/log4j-1.2.16.jar:bin/../lib/mail-1.4.5.jar:bin/../lib/mysql-connector-java-5.1.28.jar:bin/../lib/parseq-1.3.7.jar:bin/../lib/pegasus-common-1.15.7.jar:bin/../lib/r2-1.15.7.jar:bin/../lib/restli-common-1.15.7.jar:bin/../lib/restli-server-1.15.7.jar:bin/../lib/servlet-api-2.5.jar:bin/../lib/slf4j-api-1.6.1.jar:bin/../lib/velocity-1.7.jar:bin/../lib/velocity-tools-2.0.jar:bin/../extlib/azkaban-common-2.6.4.jar:bin/../extlib/azkaban-execserver-2.6.4.jar:bin/../extlib/azkaban-webserver-2.6.4.jar:bin/../extlib/commons-cli-1.2.jar:bin/../extlib/hadoop-auth-2.5.1.jar:bin/../extlib/hadoop-common-2.5.1.jar:bin/../extlib/hadoop-hdfs-2.5.1.jar:bin/../extlib/hive-cli-0.13.1.jar:bin/../extlib/hive-common-0.13.1.jar:bin/../extlib/hive-exec-0.13.1.jar:bin/../extlib/jackson-core-asl-1.9.5.jar:bin/../extlib/jackson-mapper-asl-1.9.5.jar:bin/../extlib/log4j-1.2.16.jar:bin/../extlib/protobuf-java-2.5.0.jar:bin/../extlib/servlet-api-2.5.jar:bin/../extlib/slf4j-api-1.6.1.jar:bin/../extlib/slf4j-log4j12-1.6.4.jar:bin/../extlib/velocity-1.7.jar:bin/../extlib/velocity-tools-2.0.jar:bin/../plugins/*/*.jar:/usr/local/hadoop-suite/hadoop/conf:/usr/local/hadoop-suite/hadoop/*:/usr/local/hadoop-suite/hive/conf:/usr/local/hadoop-suite/hive/lib/*
2015/01/21 16:02:33.518 +0800 ERROR [AzkabanWebServer] [Azkaban] Starting Jetty Azkaban Executor...
2015/01/21 16:02:33.937 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.hdfs.HdfsBrowserServlet
2015/01/21 16:02:33.941 +0800 INFO [AzkabanWebServer] [Azkaban] Source jar /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/hdfs/lib/azkaban-hdfs-viewer-2.6.4.jar
2015/01/21 16:02:33.945 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.javaviewer.JavaViewerServlet
2015/01/21 16:02:33.946 +0800 INFO [AzkabanWebServer] [Azkaban] Source jar /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/javaviewer/lib/azkaban-javaviewer-2.6.3.jar
2015/01/21 16:02:33.947 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.reportal.ReportalServlet
2015/01/21 16:02:33.947 +0800 ERROR [AzkabanWebServer] [Azkaban] External library path /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/extlib not found.
2015/01/21 16:02:33.950 +0800 INFO [AzkabanWebServer] [Azkaban] Source jar /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/lib/azkaban-reportal-$%7Bgit.tag%7D.jar
Reportal web resources: /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/web
2015/01/21 16:02:33.953 +0800 ERROR [AzkabanWebServer] [Azkaban] Plugin class azkaban.viewer.jobsummary.JobSummaryServlet
2015/01/21 16:02:33.953 +0800 ERROR [AzkabanWebServer] [Azkaban] External library path /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/jobsummary/extlib/* not found.
```
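Two concrete issues are visible in the log itself, independent of anything Hadoop-related: the `extlib` directories the viewer plugins expect do not exist, and the reportal jar was installed with an unsubstituted `${git.tag}` build placeholder in its name (`azkaban-reportal-$%7Bgit.tag%7D.jar` is its URL-encoded form). A hedged cleanup sketch — the paths come from the log, but the target jar name is only an assumption about what the build intended:

```
# Create the external library directories the plugins complain about
mkdir -p /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/extlib
mkdir -p /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/jobsummary/extlib

# Rename the reportal jar whose ${git.tag} placeholder was never substituted
cd /usr/local/hadoop-suite/azkaban-web-2.6.4-old/plugins/viewer/reportal/lib
mv 'azkaban-reportal-${git.tag}.jar' azkaban-reportal-2.6.4.jar   # assumed version; check the actual on-disk name first
```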
Running a jar fails with Hadoop java.io.IOException
[screenshot: http://img.bbs.csdn.net/upload/201703/15/1489518401_142809.png]

Running a simple jar with `hadoop jar Hadoop_Demo1.jar /user/myData/ /user/out/` fails with `Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :interface javax.xml.soap.Text`:

```
17/03/15 02:52:37 INFO client.RMProxy: Connecting to ResourceManager at s0/192.168.253.130:8032
17/03/15 02:52:37 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/03/15 02:52:38 INFO input.FileInputFormat: Total input paths to process : 2
17/03/15 02:52:38 INFO mapreduce.JobSubmitter: number of splits:2
17/03/15 02:52:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489512856623_0004
17/03/15 02:52:39 INFO impl.YarnClientImpl: Submitted application application_1489512856623_0004
17/03/15 02:52:39 INFO mapreduce.Job: The url to track the job: http://s0:8088/proxy/application_1489512856623_0004/
17/03/15 02:52:39 INFO mapreduce.Job: Running job: job_1489512856623_0004
17/03/15 02:52:50 INFO mapreduce.Job: Job job_1489512856623_0004 running in uber mode : false
17/03/15 02:52:50 INFO mapreduce.Job: map 0% reduce 0%
17/03/15 02:55:18 INFO mapreduce.Job: map 50% reduce 0%
17/03/15 02:55:18 INFO mapreduce.Job: Task Id : attempt_1489512856623_0004_m_000001_0, Status : FAILED
Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :interface javax.xml.soap.Text
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414)
    at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassCastException: interface javax.xml.soap.Text
    at java.lang.Class.asSubclass(Class.java:3404)
    at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:887)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1004)
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
    ... 9 more
Container killed by the ApplicationMaster.
17/03/15 02:55:18 INFO mapreduce.Job: Task Id : attempt_1489512856623_0004_m_000000_0, Status : FAILED (same exception and stack trace as above)
17/03/15 02:55:19 INFO mapreduce.Job: map 0% reduce 0%
17/03/15 02:55:31 INFO mapreduce.Job: Task Id : attempt_1489512856623_0004_m_000000_1, Status : FAILED (same exception and stack trace as above)
```
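The `ClassCastException: interface javax.xml.soap.Text` thrown from `getOutputKeyComparator` strongly suggests the job's key class was set to `javax.xml.soap.Text` (a typical IDE auto-import mix-up) instead of Hadoop's `org.apache.hadoop.io.Text`, which is a `WritableComparable` and therefore usable as a map output key. A minimal self-contained driver sketch with the correct import — `WordCount`, `TokenMapper`, and `SumReducer` are placeholder names, not the asker's actual code:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;   // NOT javax.xml.soap.Text
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : vals) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        // The map output key must be a WritableComparable such as org.apache.hadoop.io.Text.
        // Auto-importing javax.xml.soap.Text here is exactly what produces
        // "ClassCastException: interface javax.xml.soap.Text" in getOutputKeyComparator.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```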
Installing Hadoop 2.6.0 on Ubuntu
A question, please:
1. When I run `bin/hadoop jar share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-sources.jar`, I get the error: `RunJar jarFile [mainClass] args...`.
2. When I run `/usr/local/hadoop$ org.apache.hadoop.examples.WordCount input output`, I get: `org.apache.hadoop.examples.WordCount: command not found`.
I just wanted to run a quick test before touching the Hadoop configuration files. How should I deal with this? Thanks!
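Both errors look like usage issues rather than installation problems: `RunJar jarFile [mainClass] args...` is Hadoop printing its usage message because the `-sources` jar contains only `.java` files and no main class was named, and a bare Java class name is not a shell command. A sketch of the usual invocations, assuming the standard 2.6.0 layout where the compiled examples jar sits directly under `share/hadoop/mapreduce/`:

```
# Use the compiled examples jar (not the -sources jar) and name the example to run
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output

# A fully qualified class name is run through the hadoop launcher, not typed directly
bin/hadoop org.apache.hadoop.examples.WordCount input output
```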
Upgrading Hadoop from 2.6.0 to 2.6.3
I followed the procedure from the official site:
1. Downloaded Hadoop 2.6.3.
2. Ran `rollingUpgrade prepare`.
3. Stopped the standby NameNode and started it from the 2.6.3 installation with `rollingUpgrade started`.
4. Failed over the active and standby NameNodes, then repeated step 3 on the new standby.
5. Upgraded the DataNodes.
6. Ran the finalize step.

After step 3 a lot of INFO-level messages are printed — can those be ignored? Also, a NameNode started this way has no pid file (there is no namenode entry under hadoop/pids) — is that a problem? Finally, after the DataNodes were upgraded, the finalize step reported that no rolling upgrade is in progress. Has anyone done this upgrade? Where did my procedure go wrong?
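For reference, this is my reading of the documented command sequence for an HA rolling upgrade; the commands are the standard `hdfs` ones, while `nn1`/`nn2` are placeholder NameNode IDs, not taken from the question. Note that `hdfs namenode -rollingUpgrade started` runs the NameNode in the foreground, so no pid file is written; starting it through `hadoop-daemon.sh` gives the usual daemonized process, which may explain the missing pid observed in step 3.

```
# 1. Prepare, then poll until the rollback image is ready
hdfs dfsadmin -rollingUpgrade prepare
hdfs dfsadmin -rollingUpgrade query

# 2. On the standby NameNode: stop it, then start the 2.6.3 binaries with the upgrade flag
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode -rollingUpgrade started

# 3. Fail over so the upgraded NameNode becomes active, then repeat step 2 on the other
hdfs haadmin -failover nn1 nn2   # placeholder IDs; with automatic failover, let the ZKFC handle it

# 4. Upgrade the DataNodes one by one, then finalize from the upgraded cluster
hdfs dfsadmin -rollingUpgrade finalize
```

The "no rolling upgrade in progress" message at finalize often means the upgrade was already finalized or the NameNodes were restarted without the `-rollingUpgrade started` flag somewhere along the way; that would be the first thing to check.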
Running a jar through YARN reports java.lang.ClassNotFoundException (the missing class is not the main class)
1. I wrote a data-analysis program and packaged it into a jar with IDEA; the dependency jars were all bundled in. ![screenshot](https://img-ask.csdn.net/upload/201911/03/1572779664_439750.png) `job.setJarByClass(CountDurationRunner.class);` is already set.
2. Started the Hadoop, ZooKeeper, and HBase clusters.
3. Ran the jar through YARN: `$ /opt/module/hadoop-2.7.2/bin/yarn jar ct_analysis.jar runner.CountDurationRunner`. Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201911/03/1572779908_781957.png)

The CountDurationRunner class:

```
package runner;

import kv.key.ComDimension;   // this is the first class reported as not found
import kv.value.CountDurationValue;
import mapper.CountDurationMapper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import outputformat.MysqlOutputFormat;
import reducer.CountDurationReducer;

import java.io.IOException;

public class CountDurationRunner implements Tool {
    private Configuration conf = null;

    @Override
    public void setConf(Configuration conf) {
        this.conf = HBaseConfiguration.create(conf);
    }

    @Override
    public Configuration getConf() {
        return this.conf;
    }

    @Override
    public int run(String[] args) throws Exception {
        // Get the configuration
        Configuration conf = this.getConf();
        // Instantiate the job
        Job job = Job.getInstance(conf);
        job.setJarByClass(CountDurationRunner.class);
        // Wire up the Mapper / InputFormat
        initHbaseInputConfig(job);
        // Wire up the Reducer / OutputFormat
        initHbaseOutputConfig(job);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    private void initHbaseOutputConfig(Job job) {
        Connection connection = null;
        Admin admin = null;
        String tableName = "ns_ct:calllog";
        try {
            connection = ConnectionFactory.createConnection(job.getConfiguration());
            admin = connection.getAdmin();
            if (!admin.tableExists(TableName.valueOf(tableName)))
                throw new RuntimeException("target table not found");
            Scan scan = new Scan();
            // Initialize the Mapper
            TableMapReduceUtil.initTableMapperJob(
                    tableName,
                    scan,
                    CountDurationMapper.class,
                    ComDimension.class,
                    Text.class,
                    job,
                    true);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (admin != null) admin.close();
                if (connection != null) connection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void initHbaseInputConfig(Job job) {
        job.setReducerClass(CountDurationReducer.class);
        job.setOutputKeyClass(ComDimension.class);
        job.setOutputValueClass(CountDurationValue.class);
        job.setOutputFormatClass(MysqlOutputFormat.class);
    }

    public static void main(String[] args) {
        try {
            int status = ToolRunner.run(new CountDurationRunner(), args);
            System.exit(status);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

This problem has been bothering me for a long time. Some say the classpath is wrong, but I don't know what to change. Please help!
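Since `kv.key.ComDimension` belongs to the asker's own jar, a `ClassNotFoundException` for it usually means either the "dependencies included" jar is not actually a fat jar (the IDEA artifact didn't bundle the module's classes), or the submitting JVM is missing HBase's jars on its classpath. Two hedged checks worth trying first — the `/opt/module/hbase` path is an assumption modeled on the question's `/opt/module/hadoop-2.7.2`, and `hbase mapredcp` is the standard HBase helper that prints its MapReduce dependency classpath:

```
# 1. Verify the jar really contains the "missing" class before blaming the cluster
jar tf ct_analysis.jar | grep ComDimension

# 2. Put HBase's dependency jars on the client classpath, then submit as before
export HADOOP_CLASSPATH=$(/opt/module/hbase/bin/hbase mapredcp)
/opt/module/hadoop-2.7.2/bin/yarn jar ct_analysis.jar runner.CountDurationRunner
```

For the task side, the code already passes `true` as the last argument of `initTableMapperJob`, which asks `TableMapReduceUtil` to ship the dependency jars with the job; so if step 1 shows the class is absent from the jar, fixing the IDEA artifact (or building a proper fat jar) is the likely remedy.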
Hadoop 2.5.2 MapReduce job fails
```
16/06/14 03:26:45 INFO client.RMProxy: Connecting to ResourceManager at centos1/192.168.6.132:8032
16/06/14 03:26:47 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/14 03:26:47 INFO input.FileInputFormat: Total input paths to process : 1
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: number of splits:1
16/06/14 03:26:48 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1465885546873_0002
16/06/14 03:26:49 INFO impl.YarnClientImpl: Submitted application application_1465885546873_0002
16/06/14 03:26:49 INFO mapreduce.Job: The url to track the job: http://centos1:8088/proxy/application_1465885546873_0002/
16/06/14 03:26:49 INFO mapreduce.Job: Running job: job_1465885546873_0002
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 running in uber mode : false
16/06/14 03:27:10 INFO mapreduce.Job: map 0% reduce 0%
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 failed with state FAILED due to: Application application_1465885546873_0002 failed 2 times due to Error launching appattempt_1465885546873_0002_000002. Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:50334 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:118)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```

The error log is as follows:

```
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1465885546873_0002_01_000001, NodeId: local:42709, NodeHttpAddress: local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:42709 }, ] for AM appattempt_1465885546873_0002_000001
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1465885546873_0002_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2016-06-14 03:26:50,948 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:51,950 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:52,951 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:53,952 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:54,953 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:55,954 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:56,956 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:57,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:58,959 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:59,960 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:59,962 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error launching appattempt_1465885546873_0002_000001. Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:42709 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
```

core-site.xml:

```
<configuration>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>centos1:2181,centos2:2181,centos3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop2.5</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
```

hdfs-site.xml:

```
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>centos1,centos2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos1</name>
    <value>centos1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos2</name>
    <value>centos2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.centos1</name>
    <value>centos1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.centos2</name>
    <value>centos2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://centos2:8485;centos3:8485;centos4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop-data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```

yarn-site.xml:

```
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>centos1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>centos1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>centos1:8033</value>
  </property>
</configuration>
```

mapred-site.xml:

```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

slaves:

```
centos2
centos3
centos4
```

hosts:

```
127.0.0.1 local local.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```
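Reading the logs and the hosts file together: the ResourceManager is trying to launch the ApplicationMaster on a NodeManager that registered itself as `local:42709`, and `local` resolves to `127.0.0.1` via the first hosts line, so the RM ends up dialing its own loopback interface and gets "Connection refused". A plausible fix, assuming the slave machines are meant to be centos2-4: make sure no cluster node's own hostname maps to 127.0.0.1, set each machine's hostname to its cluster name, and restart YARN so the NodeManagers re-register under their real names. A sketch:

```
# /etc/hosts on every node: keep loopback generic, never a node's own hostname
127.0.0.1   localhost localhost.localdomain
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4

# On the misnamed node (currently "local.localdomain"), set the real hostname, e.g.:
hostnamectl set-hostname centos2   # CentOS 7; on CentOS 6 edit /etc/sysconfig/network instead
```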