Hadoop error: Failed to start namenode

2018-05-29 18:42:42,749 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:423)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:341)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:288)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:259)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1169)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1631)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
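
The message on the first stack frame comes from the JDK: the `java.io.File(URI)` constructor rejects any `file:` URI whose authority component is set. A minimal, JDK-only sketch of that behavior (the hostname below is purely illustrative):

```java
import java.io.File;
import java.net.URI;

public class UriAuthorityDemo {
    public static void main(String[] args) {
        // "pinpoint-1" lands in the URI's authority component,
        // which the File(URI) constructor rejects.
        try {
            new File(URI.create("file://pinpoint-1/data"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: URI has an authority component
        }

        // A triple-slash file URI has no authority component and is accepted.
        System.out.println(new File(URI.create("file:///data")).getPath());
    }
}
```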

The hdfs-site.xml file:

**cat hdfs-site.xml**

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Set the HDFS nameservice to ns1; this must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has three NameNodes: nn1, nn2 and nn3 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2,nn3</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>pinpoint-1:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>pinpoint-1:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>pinpoint-2:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>pinpoint-2:50070</value>
  </property>
  <!-- RPC address of nn3 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn3</name>
    <value>pinpoint-3:9000</value>
  </property>
  <!-- HTTP address of nn3 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn3</name>
    <value>pinpoint-3:50070</value>
  </property>
  <!-- Where the NameNode's shared edits (metadata) are stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://pinpoint-1:8485;pinpoint-2:8485;pinpoint-3:8485/ns1</value>
  </property>
  <!-- Where each JournalNode stores data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/pinpoint/data/zookeeper/hadoop/journaldata</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/pinpoint/data/zookeeper/hadoop/dfs/data</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Failover proxy provider used to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- The sshfence method requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- Connect timeout for the sshfence method -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>

The core-site.xml file:

**cat core-site.xml**

<configuration>
  <!-- Address of the NameNode (the ns1 nameservice) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1:9000</value>
  </property>
  <!-- Base directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///pinpoint/data/hadoop/tmp</value>
  </property>
  <!-- Maximum time between checkpoint backups of the edit log -->
  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
  </property>
</configuration>

1 Answer

Change `file:///pinpoint/data/hadoop/tmp` to `/pinpoint/data/hadoop/tmp`: `hadoop.tmp.dir` should be a plain local path, not a `file:` URI.
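
A plausible mechanism, assuming the stock hdfs-default.xml where `dfs.namenode.name.dir` defaults to `file://${hadoop.tmp.dir}/dfs/name`: substituting a `hadoop.tmp.dir` that is itself a `file:` URI yields `file://file:///pinpoint/data/hadoop/tmp/dfs/name`, a URI whose authority component is set, which is exactly what `java.io.File` rejects in the stack trace above. A corrected core-site.xml sketch, keeping the asker's paths (note also that an HA nameservice is normally referenced in `fs.defaultFS` without a port):

```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- logical nameservice ID from dfs.nameservices; no port for an HA nameservice -->
    <value>hdfs://ns1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- plain local path, not a file:// URI -->
    <value>/pinpoint/data/hadoop/tmp</value>
  </property>
</configuration>
```

After the change, re-running `hdfs namenode -format` on the first NameNode should get past this exception.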

Other related questions
Problem formatting the namenode while setting up a Hadoop environment on Ubuntu

I did all the earlier configuration following the tutorial, then ran `hdfs namenode -format` in the terminal and got this error:

FATAL namenode.NameNode: Exception in namenode join
java.lang.NullPointerException
    at java.io.File.<init>(File.java:277)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.setStorageDirectories(NNStorage.java:300)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.<init>(NNStorage.java:161)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.<init>(FSImage.java:127)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:829)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)

Afterwards `jps` showed the namenode was not running. I tried deleting the namenode and datanode directories, recreating them, and formatting the namenode again, but that did not solve it. Please take a look!

Spark configured successfully on Windows, but the HDFS namenode won't start

16/08/22 09:44:14 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:322)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)

Please help; this is a single-machine setup. Spark already runs, but HDFS does not.

Hadoop namenode problem

The first time I started the fully distributed cluster, everything came up normally: all the expected services showed up in `jps`, and the browser page on port 50070 opened fine. But after a refresh the page stopped loading, and running `jps` again showed the error below: ![screenshot](https://img-ask.csdn.net/upload/201708/21/1503306736_113938.png). I don't know what is causing this.

Namenode format fails while setting up a Hadoop HA environment; please help

The error is as follows:

16/12/27 19:25:48 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/12/27 19:25:48 INFO namenode.FSNamesystem: Stopping services started for active state
16/12/27 19:25:48 INFO namenode.FSNamesystem: Stopping services started for standby state
16/12/27 19:25:48 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/12/27 19:25:48 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/12/27 19:25:48 INFO util.ExitUtil: Exiting with status 1
16/12/27 19:25:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-tjf/192.168.1.105
************************************************************/

Hadoop namenode won't start

INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException java.net.BindException: Port in use: dao:50070 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504) Caused by: java.net.BindException: Cannot assign requested address at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886) ... 8 more 17/03/16 10:54:06 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system... 17/03/16 10:54:06 INFO impl.MetricsSystemImpl: NameNode metrics system stopped. 17/03/16 10:54:06 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 17/03/16 10:54:06 FATAL namenode.NameNode: Failed to start namenode. java.net.BindException: Port in use: dao:50070 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504) Caused by: java.net.BindException: Cannot assign requested address at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886) ... 8 more

Hadoop 2.6 namenode creation failed

(Everything before this point looked normal.)

2016-03-23 08:30:10,036 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-03-23 08:30:10,040 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-03-23 08:30:10,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-03-23 08:30:10,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-03-23 08:30:10,142 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-03-23 08:30:10,144 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

Hadoop 2.7.2 distributed setup: namenode fails to start after formatting

Step 1: run hadoop namenode -formate. STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z STARTUP_MSG: java = 1.7.0_76 ************************************************************/ 16/08/02 04:26:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 16/08/02 04:26:16 INFO namenode.NameNode: createNameNode [-formate] Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-rollback] | [-rollingUpgrade <rollback|downgrade|started> ] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force] ] | [-metadataVersion ] ] 16/08/02 04:26:16 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100 Step 2: run start-all.sh. The result is as follows: [root@master sbin]# sh start-all.sh This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh 16/08/02 05:45:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Starting namenodes on [master] master: starting namenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-master.out slave2: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave2.out slave3: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave3.out slave1: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave1.out Starting secondary namenodes [master] master: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-master.out 16/08/02 05:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable starting yarn daemons starting resourcemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-master.out slave2: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave2.out slave3: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave3.out slave1: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave1.out [root@master sbin]# jps 2613 ResourceManager 2467 SecondaryNameNode 2684 Jps The namenode log: 2016-08-02 05:49:49,910 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage java.io.IOException: NameNode is not formatted. 
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2016-08-02 05:49:49,928 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 2016-08-02 05:49:49,928 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking. 2016-08-02 05:49:49,930 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false 2016-08-02 05:49:49,934 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2016-08-02 05:49:49,935 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2016-08-02 05:49:49,949 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2016-08-02 05:49:49,961 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100

After rebooting the CDH master node, the namenode won't start and formatting the namenode also fails; how can this be fixed?

The log is as follows: org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$LogHeaderCorruptException: Unexpected version of the file system log file: -1. Current version = -60. at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.readLogVersion(EditLogFileInputStream.java:397) at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:146) at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.getVersion(EditLogFileInputStream.java:266) at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.validateEditLog(EditLogFileInputStream.java:318) at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.validateLog(FileJournalManager.java:544) at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:406) at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624) at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393) at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1478) at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:827) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:686) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615) 2018-02-11 17:17:47,421 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for (journal JournalAndStream(mgr=FileJournalManager(root=/dfs/nn), stream=null)) org.apache.hadoop.hdfs.server.namenode.JournalManager$CorruptionException: In-progress edit log file is corrupt: EditLogFile(file=/dfs/nn/current/edits_inprogress_0000000000000109383.corrupt,first=0000000000000109383,last=-000000000000012345,inProgress=true,hasCorruptHeader=true) at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:410) at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624) at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393) at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1478) at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:827) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:686) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796) at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615) 2018-02-11 17:17:47,459 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Disabling journal JournalAndStream(mgr=FileJournalManager(root=/dfs/nn), stream=null) 2018-02-11 17:17:47,460 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for too many journals 2018-02-11 17:17:47,460 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Skipping jas JournalAndStream(mgr=FileJournalManager(root=/dfs/nn), stream=null) since it's disabled 2018-02-11 17:17:47,471 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 109382 but unable to find any edit logs containing txid 109382 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1617) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1575) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:704) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615) 2018-02-11 17:17:47,774 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@alexnode1.cdh:50070 2018-02-11 17:17:47,785 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking. 2018-02-11 17:17:47,785 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false 2018-02-11 17:17:47,910 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 2018-02-11 17:17:47,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 2018-02-11 17:17:47,911 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2018-02-11 17:17:47,911 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: Gap in transactions. 
Expected to be able to read up until at least txid 109382 but unable to find any edit logs containing txid 109382 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1617) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1575) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:704) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615) 2018-02-11 17:17:47,913 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2018-02-11 17:17:48,085 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at alexnode1.cdh/192.168.1.68 ************************************************************/

Hadoop configured with ZooKeeper: exception in the namenode log at startup

I set up Hadoop with ZooKeeper; everything starts normally, the logs show no errors, and uploading files works, but the namenode has one exception:

2015-12-31 22:49:58,753 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.254.12:8485, 192.168.254.13:8485, 192.168.254.14:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.254.12:8485: Call From host5/192.168.254.15 to host2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.254.14:8485: Call From host5/192.168.254.15 to host4:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.254.13:8485: Call From host5/192.168.254.15 to host3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:460)
    at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:252)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1237)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1265)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1249)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:209)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:321)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
2015-12-31 22:49:58,900 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2015-12-31 22:49:58,900 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:334)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)

Namenode startup problem log

java.net.BindException: Problem binding to [vm01:9000] java.net.BindException: Cannot assign requested address;

Error when starting Hadoop; what is going on?

5/05/21 11:19:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Spark startup error on a Hadoop cluster

``` Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 17/09/29 09:24:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder': at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053) at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130) at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129) at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126) at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938) at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:938) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:938) at org.apache.spark.repl.Main$.createSparkSession(Main.scala:97) ... 47 elided Caused by: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542) at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455) ; at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106) at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193) at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105) at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93) at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39) at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54) at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52) at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35) at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289) at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050) ... 61 more Caused by: java.lang.RuntimeException: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542) at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522) at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:191) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264) at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362) at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266) at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66) at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) ... 70 more Caused by: org.apache.hadoop.fs.ParentNotDirectoryException: /tmp (is not a directory) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542) at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002) at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970) at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047) at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061) at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036) at org.apache.hadoop.hive.ql.exec.Utilities.createDirsWithPermission(Utilities.java:3679) at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:597) at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508) ... 84 more Caused by: org.apache.hadoop.ipc.RemoteException: /tmp (is not a directory) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:530) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:522) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:497) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1603) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1621) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:542) at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:51) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2970) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1078) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455) at org.apache.hadoop.ipc.Client.call(Client.java:1475) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy22.mkdirs(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy23.mkdirs(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000) ... 
94 more <console>:14: error: not found: value spark import spark.implicits._ ^ <console>:14: error: not found: value spark import spark.sql ^ Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 2.2.0 /_/ Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144) Type in expressions to have them evaluated. Type :help for more information. scala> ```

Cannot start the namenode in a Hadoop 2.x pseudo-distributed setup on CentOS 6.8

Formatting the node succeeds, but the namenode will not start. The log shows the following error:
```
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2017-05-22T10:49Z
STARTUP_MSG: java = 1.8.0_144
************************************************************/
2020-01-31 16:37:06,931 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-01-31 16:37:06,935 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-01-31 16:37:07,161 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-01-31 16:37:07,233 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-01-31 16:37:07,233 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-01-31 16:37:07,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://hadoop101:9000
2020-01-31 16:37:07,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use hadoop101:9000 to access this namenode/service.
2020-01-31 16:37:07,409 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://huawei_mate_10-53013e4c60:50070
2020-01-31 16:37:07,457 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-01-31 16:37:07,464 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-01-31 16:37:07,469 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-01-31 16:37:07,473 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-01-31 16:37:07,475 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: The value of property bind.address must not be null
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1134)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
    at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:398)
    at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:351)
    at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:114)
    at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:290)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:126)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:752)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:638)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2020-01-31 16:37:07,477 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2020-01-31 16:37:07,479 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop101/192.168.117.101
************************************************************/
```
The key error is java.lang.IllegalArgumentException: The value of property bind.address must not be null. My core-site.xml configuration:

<configuration>
  <!-- Address of the NameNode in HDFS -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop101:9000</value>
  </property>
  <!-- hadoop101 is already configured in the hosts file -->
  <!-- Directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp</value>
  </property>
</configuration>

Hoping someone can help explain this. Many thanks!

Error connecting to the Hadoop HDFS filesystem from Java

The error is:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "localhost.localdomain/127.0.0.1"; destination host is: "172.16.6.57":9000;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
    at org.apache.hadoop.ipc.Client.call(Client.java:1229)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy9.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy9.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:193)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1324)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1343)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1255)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1212)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:276)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:265)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:781)
    at com.zk.hdfs.FileCopyToHdfs.uploadToHdfs(FileCopyToHdfs.java:44)
    at com.zk.hdfs.FileCopyToHdfs.main(FileCopyToHdfs.java:21)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
    at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:73)
    at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
    at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:213)
    at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:746)
    at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:238)
    at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:282)
    at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
    at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
    at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
    at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:985)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:938)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:836)

The code was found online:

package com.zk.hdfs;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyToHdfs {

    public static void main(String[] args) throws Exception {
        try {
            uploadToHdfs();
            //deleteFromHdfs();
            //getDirectoryFromHdfs();
            //appendToHdfs();
            //readFromHdfs();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println("SUCCESS");
        }
    }

    /** Upload a file to HDFS. */
    public static void uploadToHdfs() throws FileNotFoundException, IOException {
        String localSrc = "e:/test.txt";
        String dst = "hdfs://172.16.6.57:9000/user/abc/zk/test1.txt";
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print(".");
            }
        });
        IOUtils.copyBytes(in, out, 4096, true);
    }
}

It always fails with this connection problem and I couldn't find anything about it online. Please help!

Problem starting DFS in Hadoop

I just started with Hadoop. After formatting the namenode, I started Hadoop with sudo sbin/start-dfs.sh and got an error:

hadoop@qiaoyu-Lenovo-G460:/usr/local/hadoop-2.4.1$ sudo sbin/start-dfs.sh
[sudo] password for hadoop:
Starting namenodes on [localhost]
root@localhost's password:
localhost: Permission denied, please try again.

I searched online for a long time; the most common advice was to change the password with sudo passwd. I tried that, but the same thing still happens. I'm stuck and would appreciate any help.

Newbie asking for help: the NodeManager process on the Hadoop cluster slaves keeps dying

s100 is the master; s101, s102 and s103 are slaves. After `namenode -format` followed by `start-all.sh`, I can see the NodeManager process on the slaves, but in the web UI at http://S100:50070 no datanode information shows up. Checking yarn-root-nodemanager-S101.log on the slave nodes I found these errors: ``` 2019-06-01 02:20:02,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:03,596 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:04,603 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:05,609 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:06,611 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:07,616 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:08,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 2019-06-01 02:20:08,624 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 18 more 2019-06-01 02:20:08,632 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at 
com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 18 more 2019-06-01 02:20:08,638 INFO org.apache.hadoop.service.AbstractService: Service NodeManager failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at 
org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 18 more 2019-06-01 02:20:08,716 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042 2019-06-01 02:20:08,727 INFO org.apache.hadoop.ipc.Server: Stopping server on 39863 2019-06-01 02:20:08,742 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 39863 2019-06-01 02:20:08,748 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2019-06-01 02:20:08,749 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting. 2019-06-01 02:20:08,790 INFO org.apache.hadoop.ipc.Server: Stopping server on 8040 2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8040 2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2019-06-01 02:20:08,836 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Public cache exiting 2019-06-01 02:20:08,840 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system... 2019-06-01 02:20:08,842 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped. 2019-06-01 02:20:08,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete. 
2019-06-01 02:20:08,843 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543) Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1413) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1452) ... 
18 more 2019-06-01 02:20:08,857 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NodeManager at S101/127.0.0.1 ************************************************************/ ``` 哪位大佬帮忙看看,小弟先谢过 如下是slaves 的配置 yarn-site.xml ``` <configuration> <property> <name>yarn.resourcemanager.hostname</name> <value>S100</value> </property> <property> <name>yarn.nodemanager.local-dirs</name> <value>/disk1/nm-local-dir,/disk2/nm-local-dir</value> </property> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce.shuffle</value> </property> <property> <name>yarn.nodemanager.resource.memory-mb</name> <value>16384</value> </property> <property> <name>yarn.nodemanager.resource.cpu-vcores</name> <value>16</value> </property> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.resourcemanager.address</name> <value>S100:8032</value> </property> <property> <name>yarn.resourcemanager.scheduler.address</name> <value>S100:8030</value> </property> <property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>S100:8031</value> </property> </configuration> ``` core-site.xml ``` <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://S100:8020/</value> </property> </configuration> ```

A question about installing Hadoop

After running `hadoop namenode -format` and then `start-dfs.sh`, the namenode log reports the following error:

```
2011-08-11 18:27:20,179 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
```

core-site.xml is configured as follows:

```
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadooptmp</value>
  </property>
</configuration>
```

hdfs-site.xml:

```
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

mapred-site.xml:

```
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/opt/hadoop/mapred/local</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/opt/hadoop/mapred/system</value>
  </property>
</configuration>
```

From the log it looks as if the namenode was never formatted, but I definitely did format it. The namenode's name directory is not created automatically, so I created it by hand and set its permissions with `chmod -R 777 hadoop`, so read/write permissions should not be the problem either. If I don't create it by hand, I get this error instead:

```
2011-08-11 18:37:09,401 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /opt/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
```

Could anyone help me solve this? Many thanks. I suspect it is a Hadoop permissions problem; advice would be much appreciated.
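One hedged observation: `hadoop namenode -format` writes its image into whatever dfs.name.dir resolved to at the moment of formatting, so "NameNode is not formatted" typically means the daemon is reading a directory the format never touched, for example because the configuration changed after formatting or because the format ran as a different user. A quick way to verify, assuming the paths from the question's hdfs-site.xml:

```
# After a successful format, dfs.name.dir must contain a current/
# directory with VERSION and fsimage files inside:
ls -l /opt/hadoop/hdfs/name/current

# If it is missing or empty, stop the daemons, fix ownership (replace
# "hadoop" with whatever user actually starts the namenode), and format
# again while nothing is running:
stop-dfs.sh
chown -R hadoop:hadoop /opt/hadoop/hdfs
hadoop namenode -format
start-dfs.sh
```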

HDFS startup warning: WARN util.NativeCodeLoader

Starting HDFS produces the following warning:

```
[hadoop@hadoop tmp]$ start-dfs.sh
15/02/02 20:39:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
hadoop: starting namenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-hadoop.out
15/02/02 20:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
```

Searching online suggests it is a Linux version problem.

```
[hadoop@hadoop ~]$ uname -a
Linux hadoop 2.6.32-431.el6.i686 #1 SMP Fri Nov 22 00:26:36 UTC 2013 i686 i686 i386 GNU/Linux
```

JDK version:

```
[hadoop@hadoop ~]$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Client VM (build 19.1-b02, mixed mode, sharing)
```

Hadoop version:

```
[hadoop@hadoop ~]$ hadoop version
Hadoop 2.5.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0
Compiled by jenkins on 2014-11-14T23:45Z
Compiled with protoc 2.5.0
From source with checksum df7537a4faa4658983d397abf4514320
This command was run using /usr/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar
```

Could an expert please take a look?
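The warning itself is harmless, since Hadoop falls back to its pure-Java implementations, but the likely root cause here is an architecture mismatch: `uname -a` reports a 32-bit (i686) kernel, while the native libraries bundled in the official Hadoop 2.x tarballs are built for 64-bit Linux. A quick way to confirm, assuming the install path shown in the question:

```
# Compare the bundled library's architecture with the OS architecture:
file /usr/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0   # e.g. "ELF 64-bit LSB shared object"
uname -m                                               # "i686" here means a 32-bit OS

# Debug logging shows exactly why the loader rejected the library:
export HADOOP_ROOT_LOGGER=DEBUG,console
hadoop fs -ls /
```

If the architectures differ, the options are to rebuild the native libraries from source on the 32-bit machine (`mvn package -Pdist,native -DskipTests`) or simply to live with the warning.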

After enabling Kerberos authentication in Hadoop, the `hadoop fs -ls` command no longer works; please help

Hadoop version: Apache Hadoop 2.7.3, JDK 1.7. Running `hadoop fs -ls` produces the following error:

```
[hadoop@hadoop01 native]$ hadoop fs -ls
17/08/01 01:33:36 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop01/192.168.148.129"; destination host is: "hadoop01":9000;
```

`klist` shows that the credential cache exists:

```
[hadoop@hadoop01 native]$ klist
Ticket cache: KEYRING:persistent:1001:1001
Default principal: hadoop/hadoop01@HADOOP.COM

Valid starting       Expires              Service principal
08/01/2017 01:12:54  08/02/2017 01:12:54  krbtgt/HADOOP.COM@HADOOP.COM
```

And the web UI at http://192.168.148.129:50070/dfshealth.html#tab-overview works fine:

```
Configured Capacity: 55.38 GB
DFS Used: 16 KB (0%)
Non DFS Used: 11.4 GB
DFS Remaining: 43.99 GB (79.42%)
Block Pool Used: 16 KB (0%)
DataNodes usages% (Min/Median/Max/stdDev): 0.00% / 0.00% / 0.00% / 0.00%
Live Nodes: 2 (Decommissioned: 0)
Dead Nodes: 0 (Decommissioned: 0)
Decommissioning Nodes: 0
Total Datanode Volume Failures: 0 (0 B)
Number of Under-Replicated Blocks: 0
Number of Blocks Pending Deletion: 0
Block Deletion Start Time: 2017/8/1 10:12:21 AM
```
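A hedged guess at the cause: klist reports the ticket cache as KEYRING:persistent:1001:1001, the kernel-keyring cache that newer RHEL/CentOS releases use by default. The JDK's Kerberos implementation (certainly JDK 7, as used here) can only read FILE-based ticket caches, so the shell sees a valid TGT while the Java client finds none, which matches the "Failed to find any Kerberos tgt" message exactly. Switching to a file cache is the usual fix:

```
# Point Kerberos at a FILE ticket cache and authenticate again:
export KRB5CCNAME=FILE:/tmp/krb5cc_$(id -u)
kinit hadoop/hadoop01@HADOOP.COM
klist            # should now report "Ticket cache: FILE:/tmp/krb5cc_1001"
hadoop fs -ls /
```

To make the change permanent, set `default_ccache_name = FILE:/tmp/krb5cc_%{uid}` in the [libdefaults] section of /etc/krb5.conf, or configure the client to log in from a keytab instead.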
