Hadoop 2.6.5 cluster: on startup the master only launches its own DataNode; the slave nodes can't be controlled and have no logs?

One master and two slaves. Reformatting several times and deleting /usr/local/hadoop/logs and /tmp made no difference.
The DataNode error log shows:
2019-05-22 15:43:46,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: Master/219.226.109.130:9000
But the firewall is disabled on every node.

/etc/hosts on every node is configured as:
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
219.226.109.130 Master
219.226.109.129 Slave1
219.226.109.131 Slave2
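A first check worth running before touching the Hadoop config (a sketch, assuming the hostnames and port from the question): if the NameNode has bound port 9000 to 127.0.0.1 instead of 219.226.109.130, remote DataNodes will log exactly this "Problem connecting to server" warning even with firewalls off.

```
# On Master: the listen address for port 9000 should be 219.226.109.130
# (or 0.0.0.0), not 127.0.0.1.
netstat -tlnp | grep 9000

# From each slave: confirm name resolution and raw TCP reachability.
ping -c 1 Master
telnet Master 9000
```

If Master resolves to 127.0.0.1 anywhere (for example a leftover "127.0.0.1 Master" line in some node's /etc/hosts), the slaves will try to register against the wrong address.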

Other related posts
Hadoop 2.6 setup: formatting fails with an error
log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /var/log/hadoop/hadoop/hdfs-audit.log (没有那个文件或目录) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:221) at java.io.FileOutputStream.<init>(FileOutputStream.java:142) at org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) 15/10/12 16:15:27 WARN namenode.NameNode: Encountered exception during format: java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567) at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395) 15/10/12 16:15:27 FATAL namenode.NameNode: Exception in namenode join java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567) at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395) 15/10/12 16:15:27 INFO util.ExitUtil: Exiting with status 1 15/10/12 16:15:27 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************
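The actual failure here ("Cannot remove current directory") is usually filesystem permissions rather than Hadoop itself: the user running the format cannot delete the old storage directory. A minimal check, with the user and group names as assumptions:

```
# Who owns the old NameNode storage directory?
ls -ld /hadoop/hdfs/namenode/current

# If it was created by root on an earlier run, hand it back to the user
# that runs HDFS (substitute your own user/group), then format again.
chown -R hadoop:hadoop /hadoop/hdfs/namenode
```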
Upgrading Hadoop 2.6.0 to 2.6.3
I followed the procedure on the official site: 1. downloaded hadoop 2.6.3; 2. ran rollingUpgrade prepare; 3. stopped the standby namenode and started it in the hadoop 2.6.3 environment with rollingUpgrade started; 4. failed over the active and standby namenodes and repeated step 3 on the new standby; 5. upgraded the datanodes; 6. ran the finalize step. After step 3 a lot of INFO-level messages are printed; can those be ignored? Also, a namenode started this way has no pid file (nothing for the namenode under hadoop/pids); is that a problem? Finally, after the datanodes were updated, the finalize step reported that there is no rolling upgrade in progress. Has anyone been through this upgrade? Where did my procedure go wrong?
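For comparison, the dfsadmin side of the documented procedure; finalize reports no upgrade in progress if it reaches a NameNode that never registered the "started" phase, so the query step is worth running between steps (a sketch of the documented commands, not a diagnosis of this cluster):

```
hdfs dfsadmin -rollingUpgrade prepare    # creates the rollback fsimage
hdfs dfsadmin -rollingUpgrade query      # repeat until it says to proceed
# restart each NameNode from the new version with:
#   hdfs namenode -rollingUpgrade started
hdfs dfsadmin -rollingUpgrade finalize   # only after all NNs and DNs are upgraded
```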
Hadoop 2.2: wordcount example fails
Hadoop 2.2 + JDK 1.7, running the wordcount example with: hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /word /ws. It fails with: org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1449733659077_0001_m_000000_0: Error: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.InputSplit. Any pointers would be appreciated.
Help! Hadoop 2.2.0 cluster: the NameNode reports a NullPointerException after HDFS starts
The log is as follows: 2015-02-07 01:01:46,610 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring NN shutdown. Shutting down immediately. java.lang.NullPointerException at org.apache.hadoop.hdfs.DFSUtil.substituteForWildcardAddress(DFSUtil.java:942) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.getHttpAddress(StandbyCheckpointer.java:108) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.setNameNodeAddresses(StandbyCheckpointer.java:90) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.<init>(StandbyCheckpointer.java:76) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startStandbyServices(FSNamesystem.java:994) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startStandbyServices(NameNode.java:1456) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.enterState(StandbyState.java:58) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:686) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 2015-02-07 01:01:46,614 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2015-02-07 01:01:46,620 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: I just don't understand why it keeps throwing a NullPointerException, and under remote debugging it doesn't fail at all. I'm completely lost.
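The NPE is thrown inside substituteForWildcardAddress while the standby checkpointer derives the peer NameNode's HTTP address, which typically points at a missing dfs.namenode.http-address entry for one of the HA NameNodes. A quick way to check (the nameservice "mycluster" and ids "nn1"/"nn2" are placeholders for your own hdfs-site.xml values):

```
hdfs getconf -confKey dfs.namenode.http-address.mycluster.nn1
hdfs getconf -confKey dfs.namenode.http-address.mycluster.nn2
```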
Hadoop 2.6.0 cluster: directory created with hadoop fs -mkdir -p
I set up a Hadoop 2.6.0 cluster with one master and four workers. After starting HDFS I created the directory /data/wordcount with hadoop fs -mkdir -p, but on worker:50075 I can't see the new directory. Could anyone tell me why?
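A DataNode's :50075 page only serves block data; the directory tree lives in the NameNode's namespace, so it is listed from the CLI or the NameNode web UI (a sketch, with "master" standing in for your NameNode host):

```
# List the new directory from any node with the client configured:
hadoop fs -ls -R /data

# Or browse http://master:50070 and use "Browse the filesystem";
# a worker's :50075 page will never show the namespace.
```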
Maven 3.3.9 fails to build Hadoop 2.6.5, please help
[INFO] Building Apache Hadoop Main 2.6.5 [INFO] ------------------------------------------------------------------------ Downloading: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Could not validate integrity of download from http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml: Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml Downloaded: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml (99 KB at 11.8 KB/sec) [WARNING] The metadata /root/.m2/repository/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata-ibiblio.org.xml is invalid: end tag name </body> must match start tag name <hr> from line 888 (position: START_TAG seen ... 08-Nov-2014 19:04 207\r\n</pre><hr></body>... @888:18) [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main ................................. FAILURE [ 8.416 s] [INFO] Apache Hadoop Build Tools .......................... SKIPPED [INFO] Apache Hadoop Project POM .......................... SKIPPED [INFO] Apache Hadoop Annotations .......................... SKIPPED [INFO] Apache Hadoop Assemblies ........................... SKIPPED [INFO] Apache Hadoop Project Dist POM ..................... SKIPPED [INFO] Apache Hadoop Maven Plugins ........................ SKIPPED [INFO] Apache Hadoop MiniKDC .............................. SKIPPED [INFO] Apache Hadoop Auth ................................. SKIPPED [INFO] Apache Hadoop Auth Examples ........................ SKIPPED [INFO] Apache Hadoop Common ............................... SKIPPED [INFO] Apache Hadoop NFS .................................. SKIPPED [INFO] Apache Hadoop KMS .................................. SKIPPED [INFO] Apache Hadoop Common Project ....................... SKIPPED [INFO] Apache Hadoop HDFS ................................. SKIPPED [INFO] Apache Hadoop HttpFS ............................... SKIPPED [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED [INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED [INFO] Apache Hadoop HDFS Project ......................... SKIPPED [INFO] hadoop-yarn ........................................ SKIPPED [INFO] hadoop-yarn-api .................................... SKIPPED [INFO] hadoop-yarn-common ................................. SKIPPED [INFO] hadoop-yarn-server ................................. SKIPPED [INFO] hadoop-yarn-server-common .......................... SKIPPED [INFO] hadoop-yarn-server-nodemanager ..................... SKIPPED [INFO] hadoop-yarn-server-web-proxy ....................... SKIPPED [INFO] hadoop-yarn-server-applicationhistoryservice ....... SKIPPED [INFO] hadoop-yarn-server-resourcemanager ................. SKIPPED [INFO] hadoop-yarn-server-tests ........................... SKIPPED [INFO] hadoop-yarn-client ................................. 
SKIPPED [INFO] hadoop-yarn-applications ........................... SKIPPED [INFO] hadoop-yarn-applications-distributedshell .......... SKIPPED [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SKIPPED [INFO] hadoop-yarn-site ................................... SKIPPED [INFO] hadoop-yarn-registry ............................... SKIPPED [INFO] hadoop-yarn-project ................................ SKIPPED [INFO] hadoop-mapreduce-client ............................ SKIPPED [INFO] hadoop-mapreduce-client-core ....................... SKIPPED [INFO] hadoop-mapreduce-client-common ..................... SKIPPED [INFO] hadoop-mapreduce-client-shuffle .................... SKIPPED [INFO] hadoop-mapreduce-client-app ........................ SKIPPED [INFO] hadoop-mapreduce-client-hs ......................... SKIPPED [INFO] hadoop-mapreduce-client-jobclient .................. SKIPPED [INFO] hadoop-mapreduce-client-hs-plugins ................. SKIPPED [INFO] Apache Hadoop MapReduce Examples ................... SKIPPED [INFO] hadoop-mapreduce ................................... SKIPPED [INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED [INFO] Apache Hadoop Distributed Copy ..................... SKIPPED [INFO] Apache Hadoop Archives ............................. SKIPPED [INFO] Apache Hadoop Rumen ................................ SKIPPED [INFO] Apache Hadoop Gridmix .............................. SKIPPED [INFO] Apache Hadoop Data Join ............................ SKIPPED [INFO] Apache Hadoop Ant Tasks ............................ SKIPPED [INFO] Apache Hadoop Extras ............................... SKIPPED [INFO] Apache Hadoop Pipes ................................ SKIPPED [INFO] Apache Hadoop OpenStack support .................... SKIPPED [INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED [INFO] Apache Hadoop Client ............................... SKIPPED [INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED [INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED [INFO] Apache Hadoop Tools Dist ........................... SKIPPED [INFO] Apache Hadoop Tools ................................ SKIPPED [INFO] Apache Hadoop Distribution ......................... SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 06:03 min [INFO] Finished at: 2018-06-23T11:25:17+08:00 [INFO] Final Memory: 27M/69M [INFO] ------------------------------------------------------------------------ [ERROR] Error resolving version for plugin 'org.apache.maven.plugins:maven-javadoc-plugin' from the repositories [local (/root/.m2/repository), ibiblio.org (http://mirrors.ibiblio.org/pub/mirrors/maven2)]: Plugin not found in any plugin repository -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginVersionResolutionException You have new mail in /var/spool/mail/root
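The build is failing before any module compiles: the ibiblio mirror returned an HTML error page instead of maven-metadata.xml, and Maven cached the bad file. One plausible way out (assuming a healthy repository such as Maven Central is reachable from settings.xml):

```
# Drop the poisoned cached metadata, then rebuild.
rm -rf ~/.m2/repository/org/apache/maven/plugins/maven-javadoc-plugin
mvn clean package -DskipTests -Pdist,native -Dtar
```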
How to upgrade CDH Hadoop from 2.6.0-cdh5.4.5 to 2.6.0-cdh5.13.0?
Our production cluster currently runs Hadoop 2.6.0-cdh5.4.5, and we want to upgrade to the higher 2.6.0-cdh5.13.0. The cluster is not managed by Cloudera Manager; it was installed entirely from binary tarballs. Only HDFS and Hive are in use, nothing else. How should we perform the upgrade?
Hadoop 2.6: NameNode fails to start
(Everything before this point is normal.) 2016-03-23 08:30:10,036 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504) 2016-03-23 08:30:10,040 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 2016-03-23 08:30:10,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2016-03-23 08:30:10,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504) 2016-03-23 08:30:10,142 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2016-03-23 08:30:10,144 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1 ************************************************************/
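"NameNode is not formatted" means the directory behind dfs.namenode.name.dir contains no fsimage yet. On a fresh cluster the fix is simply to format before the first start (this erases HDFS metadata, so only do it on a new or disposable cluster):

```
hdfs namenode -format
sbin/start-dfs.sh
```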
CentOS 6.8, Hadoop 2.x pseudo-distributed: the NameNode won't start
Formatting the node succeeds, but the NameNode won't start. The log shows the following error ``` STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2017-05-22T10:49Z STARTUP_MSG: java = 1.8.0_144 ************************************************************/ 2020-01-31 16:37:06,931 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 2020-01-31 16:37:06,935 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [] 2020-01-31 16:37:07,161 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2020-01-31 16:37:07,233 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 2020-01-31 16:37:07,233 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 2020-01-31 16:37:07,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://hadoop101:9000 2020-01-31 16:37:07,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use hadoop101:9000 to access this namenode/service. 2020-01-31 16:37:07,409 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://huawei_mate_10-53013e4c60:50070 2020-01-31 16:37:07,457 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2020-01-31 16:37:07,464 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 2020-01-31 16:37:07,469 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined 2020-01-31 16:37:07,473 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2020-01-31 16:37:07,475 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. 
java.lang.IllegalArgumentException: The value of property bind.address must not be null at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1134) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115) at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:398) at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:351) at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:114) at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:290) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:126) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:752) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:638) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2020-01-31 16:37:07,477 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2020-01-31 16:37:07,479 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at hadoop101/192.168.117.101 ************************************************************/ ```
The main error is: java.lang.IllegalArgumentException: The value of property bind.address must not be null
core-site.xml is configured as:
<configuration>
  <!-- NameNode address for HDFS -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop101:9000</value>
  </property>
  <!-- hadoop101 is already configured in the hosts file -->
  <!-- Directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp</value>
  </property>
</configuration>
I'd be very grateful if someone could help me figure this out. Many thanks.
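Note the earlier log line: the web server is being started at http://huawei_mate_10-53013e4c60:50070, the machine's own hostname. Underscores are not legal in host names, java.net.URI then yields a null host, and that is one known way bind.address ends up null. Renaming the host is a plausible fix (CentOS 6 style commands, as an assumption about the OS):

```
hostname hadoop101                                                  # immediate
sed -i 's/^HOSTNAME=.*/HOSTNAME=hadoop101/' /etc/sysconfig/network  # after reboot
```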
HBase 0.98.15 + Hadoop 2.6 + ZooKeeper 3.4.6 setup problem
OS: CentOS 7. Distributed setup with three nodes: node0 is the master, and in Hadoop node0, node1, and node2 all serve as DataNodes. When HBase starts, the master node has no HMaster process. The HBase error log is: ``` 2015-11-13 16:23:59,178 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,node0:2181,node2:2181 sessionTimeout=60000 watcher=master:600000x0, quorum=node1:2181,node0:2181,node2:2181, baseZNode=/hbase 2015-11-13 16:23:59,195 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Opening socket connection to server node1/192.168.0.161:2181. Will not attempt to authenticate using SASL (unknown error) 2015-11-13 16:23:59,200 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Socket connection established to node1/192.168.0.161:2181, initiating session 2015-11-13 16:23:59,211 INFO [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Session establishment complete on server node1/192.168.0.161:2181, sessionid = 0x150feccba590005, negotiated timeout = 40000 2015-11-13 16:23:59,223 INFO [main] zookeeper.RecoverableZooKeeper: Node /hbase already exists and this is not a retry 2015-11-13 16:23:59,245 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting 2015-11-13 16:23:59,246 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: starting 2015-11-13 16:23:59,301 INFO [master:namenode:60000] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2015-11-13 16:23:59,344 INFO [master:namenode:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter) 2015-11-13 16:23:59,346 INFO [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2015-11-13 16:23:59,346 INFO [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2015-11-13 16:23:59,355 INFO [master:namenode:60000] http.HttpServer: Jetty bound to port 60010 2015-11-13 16:23:59,355 INFO [master:namenode:60000] mortbay.log: jetty-6.1.26 2015-11-13 16:23:59,656 INFO [master:namenode:60000] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010 2015-11-13 16:23:59,724 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available 2015-11-13 16:23:59,725 INFO [master:namenode:60000] master.ActiveMasterManager: Registered Active Master=namenode,60000,1447403038605 2015-11-13 16:23:59,731 INFO [master:namenode:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS 2015-11-13 16:23:59,875 FATAL [master:namenode:60000] master.HMaster: Unhandled exception. Starting shutdown. 
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1342) at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371) at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1403) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1382) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1307) at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:384) at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:380) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:380) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:324) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775) at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:664) at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:642) at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:599) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:481) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:154) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130) at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881) at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684) at java.lang.Thread.run(Thread.java:745) 2015-11-13 16:23:59,876 INFO [master:namenode:60000] master.HMaster: Aborting 2015-11-13 16:23:59,877 DEBUG [master:namenode:60000] master.HMaster: Stopping service threads 2015-11-13 16:23:59,877 INFO [master:namenode:60000] ipc.RpcServer: Stopping server on 60000 2015-11-13 16:23:59,877 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping 2015-11-13 16:23:59,877 INFO [master:namenode:60000] master.HMaster: Stopping infoServer 2015-11-13 16:23:59,877 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped 2015-11-13 16:23:59,877 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping 2015-11-13 16:23:59,879 INFO [master:namenode:60000] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010 2015-11-13 16:23:59,994 INFO [master:namenode:60000] zookeeper.ZooKeeper: Session: 0x150feccba590005 closed 2015-11-13 16:23:59,994 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down 2015-11-13 16:23:59,994 INFO [master:namenode:60000] master.HMaster: HMaster main thread exiting 2015-11-13 16:23:59,995 ERROR [main] master.HMasterCommandLine: Master exiting java.lang.RuntimeException: HMaster Aborted at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:201) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3062) ``` 
The Hadoop and ZooKeeper processes are present on every node, and the HRegionServer processes are there too; only the HMaster is missing on the master node. I have checked the configuration files many times over. Another odd issue: after every system reboot, starting Hadoop shows the expected processes, but http://node0:50070 is unreachable from outside until I run iptables -F. Likewise, if that command hasn't been run after starting ZooKeeper, its MODE doesn't show either, i.e. cluster mode never comes up properly.
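The NoSuchMethodError on FSOutputSummer.<init> is a client-jar mismatch: HBase 0.98 ships Hadoop 2.2 jars, and FSOutputSummer's constructor changed by Hadoop 2.6. A common workaround is to let HBase use the cluster's own Hadoop jars (paths here are assumptions, adjust to your layout). The iptables -F symptom separately suggests the firewall service re-enables itself on boot:

```
# Swap the Hadoop jars bundled with HBase for the cluster's 2.6 jars.
rm $HBASE_HOME/lib/hadoop-*.jar
cp $HADOOP_HOME/share/hadoop/{common,hdfs,mapreduce,yarn}/hadoop-*.jar $HBASE_HOME/lib/

# CentOS 7: stop the firewall permanently instead of flushing rules each boot.
systemctl stop firewalld && systemctl disable firewalld
```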
Hadoop 2.2.0 cluster with ResourceManager HA, but the NodeManager can't reach the ResourceManager
yarn-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>11.24.88.242</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>11.24.88.244</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>11.20.26.6:2181,11.20.26.2:2181,11.20.26.3:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
RM HA is configured in yarn-site.xml, but it keeps failing: the NodeManager keeps contacting 0.0.0.0:8031 instead of the ResourceManager: 2019-08-13 13:33:26,799 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From hadoop7/11.20.200.197 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:181) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:199) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:339) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:386) Caused by: java.net.ConnectException: Call From hadoop7/11.20.200.197 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) at org.apache.hadoop.ipc.Client.call(Client.java:1351) at org.apache.hadoop.ipc.Client.call(Client.java:1300) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy23.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at $Proxy24.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:238) at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:175) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642) at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399) at org.apache.hadoop.ipc.Client.call(Client.java:1318)
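0.0.0.0:8031 is the default resource-tracker address, which a NodeManager falls back to when it cannot derive a per-rm-id address from the HA settings, typically because the yarn-site.xml on that host is incomplete (note that the file as quoted above never closes </configuration>). A sketch of the obvious remedy, with the NodeManager host list as an assumption:

```
# Validate the file, then push the complete config to every NodeManager host.
xmllint --noout $HADOOP_CONF_DIR/yarn-site.xml
for h in hadoop7 hadoop8; do
  scp $HADOOP_CONF_DIR/yarn-site.xml $h:$HADOOP_CONF_DIR/
done
```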
Seeking advice: upgrading hadoop-2.2.0 to hadoop-2.6.0
I need to upgrade Hadoop from hadoop-2.2.0 to hadoop-2.6.0, following http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade. The very first step, ./bin/hdfs dfsadmin -rollingUpgrade prepare, fails with: PREPARE rolling upgrade ... rollingUpgrade: Unknown method rollingUpgrade called on org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
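The rollingUpgrade RPC only exists from Hadoop 2.4.0 onward, so a 2.2.0 NameNode can only answer "Unknown method". Starting from 2.2.0, the classic stop-the-world upgrade is the available route (a sketch for a non-HA cluster; back up the NameNode metadata directory first):

```
sbin/stop-dfs.sh                # on the old 2.2.0 installation
sbin/start-dfs.sh -upgrade      # first start from the 2.6.0 installation
hdfs dfsadmin -finalizeUpgrade  # once the new version has been validated
```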
Hadoop 2.5.2: wordcount and -put operations both fail
Hadoop 2.5.2, one master and two slaves named slave1 and slave2. After startup, jps on the master shows:
30784 NameNode
31394 Jps
30972 SecondaryNameNode
31132 ResourceManager
and on slave1 and slave2:
8064 Jps
7943 NodeManager
7834 DataNode
Nothing looks abnormal, but when I run hadoop fs -put README.txt /input on the master it hangs and finally errors out: 17/03/09 19:59:11 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 17/03/09 19:59:11 INFO hdfs.DFSClient: Abandoning BP-247473795-10.202.15.17-1489054138763:blk_1073741827_1003 17/03/09 19:59:11 INFO hdfs.DFSClient: Excluding datanode 10.202.15.175:50010 17/03/09 20:01:18 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 17/03/09 20:01:18 INFO hdfs.DFSClient: Abandoning BP-247473795-10.202.15.17-1489054138763:blk_1073741828_1004 17/03/09 20:01:18 INFO hdfs.DFSClient: Excluding datanode 10.202.15.174:50010 17/03/09 20:01:18 WARN hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2791) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:606) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:455) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy9.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy9.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1449) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1270) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) put: File /input/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. Firewalls are disabled on all machines. I have repeatedly deleted the directories behind hadoop.tmp.dir, dfs.name.dir, and dfs.data.dir and rerun hadoop namenode -format, and it is still the same. However, if I run hadoop fs -put README.txt /input on a slave instead, it succeeds and the file is copied over; all three machines then have it. This has been bothering me for days; any help would be much appreciated.
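Since the same -put succeeds from a slave, the NameNode RPC path is fine and only the master's route to the DataNode data port (50010) appears broken. Probing that port directly narrows it down (IPs taken from the log above):

```
# From the master; if these hang, the problem is network routing or a
# remaining packet filter between master and slaves, not HDFS configuration.
telnet 10.202.15.174 50010
telnet 10.202.15.175 50010
```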
Hadoop 2.2.0 setup: NameNode initialization fails
Formatting the NameNode for HDFS fails, please help!!! FATAL namenode.NameNode: Exception in namenode join java.lang.ClassCastException: com.sun.org.apache.xerces.internal.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2111) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2001) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1918) at org.apache.hadoop.conf.Configuration.get(Configuration.java:721) at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:740) at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:965) at org.apache.hadoop.security.Groups.<init>(Groups.java:62) at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214) at org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275) at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:269) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 15/04/13 04:15:01 INFO util.ExitUtil: Exiting with status 1 15/04/13 04:15:02 INFO namenode.NameNode: SHUTDOWN_MSG:
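This ClassCastException comes out of Configuration.loadResource, which expected character data and found an element node: in practice one of the *-site.xml files has a tag nested where plain text belongs, for example inside <value>. xmllint will catch outright syntax errors; for this particular cast error, also eyeball each <value> for stray tags, since a nested element is still well-formed XML:

```
for f in $HADOOP_CONF_DIR/*-site.xml; do
  xmllint --noout "$f" && echo "$f: well-formed"
done
```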
Hadoop 2.x cluster deployment: one DataNode fails to start
Exception in secureMain java.net.UnknownHostException: node1: node1 at java.net.InetAddress.getLocalHost(InetAddress.java:1473) at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187) at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2153) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402) Caused by: java.net.UnknownHostException: node1 at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293) at java.net.InetAddress.getLocalHost(InetAddress.java:1469) ... 6 more 2015-01-16 09:08:54,152 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2015-01-16 09:08:54,164 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: node1: node1 ************************************************************/
Environment: Ubuntu, Hadoop 2.6, JDK 7. Three virtual machines are deployed: one namenode and two datanodes. /etc/hostname is set to master, node1, and node2 respectively. /etc/hosts is configured as:
27.0.0.1 localhost
127.0.1.1 ubuntu.localdomain ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.184.129 master
192.168.184.130 node1
192.168.184.131 node2
hadoop/etc/hadoop/slaves is configured as:
node1
node2
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yangwq/hadoop-2.6.0/temp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/yangwq/hadoop-2.6.0/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/yangwq/hadoop-2.6.0/dfs/data</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- resourcemanager hostname or IP address -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>
On startup the DataNode on node1 never comes up, while ssh logins to every node work fine.
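The lookup that fails is the DataNode resolving its own hostname, so the thing to verify is resolution on node1 itself; each VM keeps its own /etc/hosts copy, and a missing (or mistyped) mapping on node1 alone would produce exactly this trace:

```
# Run on node1:
hostname             # should print: node1
getent hosts node1   # should print: 192.168.184.130 node1
```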
Hadoop 2.6 with snappy: error
Hadoop 2.6 with snappy reports: native snappy library not available: this version of libhadoop was built without snappy support. ![screenshot](https://img-ask.csdn.net/upload/201505/05/1430759464_528696.png) Yet Hadoop clearly shows snappy as enabled.
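Hadoop showing the codec as configured does not mean the native library has it compiled in; checknative shows what this libhadoop.so actually supports. If the snappy line says false, the bundled native library was built without snappy and needs a rebuild (the -Drequire.snappy build flag) or a distribution that ships it:

```
hadoop checknative -a
```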
Installing Hadoop 2.6.0 on Ubuntu
A question: 1. When I run bin/hadoop jar share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-sources.jar, I get the error: RunJar jarFile [mainClass] args... 2. When I run /usr/local/hadoop$ org.apache.hadoop.examples.WordCount input output, I get: org.apache.hadoop.examples.WordCount: command not found. I wanted to run a quick test before doing anything with the configuration files, but ran into these problems. How should I proceed? Thanks!
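Both failures are invocation problems rather than configuration ones: the sources jar contains only .java files (so RunJar finds no main class to run), and a Java class name is not a shell command. The runnable examples jar plus hadoop jar covers both (paths from the question's 2.6.0 layout):

```
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar \
    wordcount input output
```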
Building Hadoop 2.6 on macOS fails
[INFO] Scanning for projects... [WARNING] [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-project:pom:2.6.0 [WARNING] 'dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: javax.servlet.jsp:jsp-api:jar -> duplicate declaration of version 2.1 @ line 563, column 19 [WARNING] 'dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.apache.curator:curator-framework:jar -> duplicate declaration of version 2.6.0 @ line 915, column 18 [WARNING] 'dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.apache.curator:curator-test:jar -> duplicate declaration of version 2.6.0 @ line 920, column 18 [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found duplicate declaration of plugin org.apache.maven.plugins:maven-enforcer-plugin @ line 1154, column 15 [WARNING] [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-project-dist:pom:2.6.0 [WARNING] 'dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: javax.servlet.jsp:jsp-api:jar -> duplicate declaration of version 2.1 @ org.apache.hadoop:hadoop-project:2.6.0, /Users/xyj/App/hadoop-2.6.0-src/hadoop-project/pom.xml, line 563, column 19 [WARNING] 'dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.apache.curator:curator-framework:jar -> duplicate declaration of version 2.6.0 @ org.apache.hadoop:hadoop-project:2.6.0, /Users/xyj/App/hadoop-2.6.0-src/hadoop-project/pom.xml, line 915, column 18 Running mvn package -DskipTests -Pdist,native -Dtar produces the errors above. Any ideas?
hadoop2.5.0-cdh5.3.1: how do I choose a matching Spark?
I installed hadoop2.5.0-cdh5.3.1 together with spark1.2.1-bin-hadoop2.4.tgz and ran into many problems. Is this a version incompatibility? Please help! Also, how do I compile the jar? I couldn't find a build or sbt path.
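Mixing a CDH Hadoop with a stock spark-1.2.1-bin-hadoop2.4 build is a plausible source of such problems; Spark can instead be built against the exact CDH version using its documented build properties (a sketch from the Spark 1.2 source tree; versions taken from the question):

```
./make-distribution.sh --tgz -Pyarn -Phadoop-2.4 \
    -Dhadoop.version=2.5.0-cdh5.3.1 -DskipTests
```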