Hadoop: the NameNode process on the Master node does not start

At first, as a single node, everything started fine. But after deploying the slave nodes and restarting, the NameNode process on the Master node no longer starts, and I can find no error in the logs. Any pointers would be appreciated!
Log contents: (screenshot not preserved)
At startup: (screenshot not preserved)
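Since the screenshots did not survive, here is a generic first diagnostic step rather than a confirmed fix for this particular cluster: when a NameNode "silently" fails to start, running it in the foreground usually surfaces the real startup error that the .out files hide. A minimal sketch, assuming a standard layout under $HADOOP_HOME:

```sh
# Did the process actually come up?
jps | grep -i namenode

# Read the end of the NameNode's .log file, not just the .out file
tail -n 100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

# Run the NameNode in the foreground so any startup exception
# (bad fs.defaultFS, unwritable dfs.namenode.name.dir, port in use)
# prints straight to the console
hdfs namenode
```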

Other related questions
Hadoop 3.1.0 distributed environment setup
Environment: VMware, CentOS 6.5, JDK 1.8 (Oracle), Hadoop 3.1.0. When I run start-dfs.sh on the master node, it only starts the NameNode and DataNode on master and the SecondaryNameNode on slave1. The DataNode processes on all the other slave nodes never start; I have to start each one manually on the slave with `hdfs --daemon start datanode`. The NameNode and DataNode logs on master show no exceptions at all.

```
[root@master hadoop-3.1.0]# start-dfs.sh
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [node1]
2019-07-14 11:04:46,521 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@master hadoop-3.1.0]# jps
29025 NameNode
29147 DataNode
29420 Jps
```

Question: why can't start-dfs.sh start the DataNodes on the other slave nodes, while it can still start the SecondaryNameNode on a slave? Please take a look. ![screenshot](https://img-ask.csdn.net/upload/201907/14/1563100513_35604.png)
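One thing worth checking, offered as a hedged suggestion rather than a confirmed diagnosis: in Hadoop 3.x the list of DataNode hosts moved from etc/hadoop/slaves to etc/hadoop/workers, and start-dfs.sh only SSHes into the hosts listed there; the SecondaryNameNode host is taken from dfs.namenode.secondary.http-address instead, which would explain why it still starts. A minimal check, assuming HADOOP_HOME points at the hadoop-3.1.0 install and node1/node2/node3 stand in for your slave hostnames:

```sh
# start-dfs.sh reads this file to decide where to launch DataNodes;
# a leftover "slaves" file from Hadoop 2.x is ignored in 3.x
cat $HADOOP_HOME/etc/hadoop/workers

# every listed host must be reachable by passwordless SSH from master
for h in node1 node2 node3; do
  ssh "$h" hostname || echo "ssh to $h failed"
done
```

If workers contains only master (or localhost), that matches the symptom exactly: only the local DataNode starts.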
Must the Spark master be configured on the same node as the Hadoop master? Does it depend on ZooKeeper?
I am using hadoop-2.5cdh5.3; the cluster's NameNode is on host 01, but when configuring Spark I pointed the Spark master at node 03. After finishing the configuration I started the Hadoop cluster and YARN in order, then started Spark and ran the example, and it keeps failing to start properly. Could someone with experience tell me: does the Spark master have to be on the same node as the Hadoop master? And does ZooKeeper have to be started? The screenshots show my example run; the web UI shows the job ended in a killed state. ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391119_876804.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391055_88906.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391145_543234.png) ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445391080_514411.png) The worker's deploy log: ![screenshot](https://img-ask.csdn.net/upload/201510/21/1445398054_94598.png)
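A hedged note on the two direct questions: in Spark standalone mode the master process can run on any host (it does not have to share a node with the Hadoop NameNode), and ZooKeeper is only needed if you configure master high availability via spark.deploy.recoveryMode=ZOOKEEPER. A minimal sketch of a sanity check, assuming the Spark master host is the one you call 03 (hostname "node03" here is an assumption; substitute yours, and note the examples-jar path varies by distribution):

```sh
# On the master host (node03): start the standalone master and workers
$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slaves.sh   # reads the hosts in conf/slaves

# Submit the bundled example explicitly against that master
$SPARK_HOME/bin/spark-submit \
  --master spark://node03:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-*.jar 100
```

If the job is being killed only when submitted to YARN, the YARN container logs (not the Spark deploy log) are the place to look.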
HBase 0.98.15 + Hadoop 2.6 + ZooKeeper 3.4.6 environment setup problem
OS: CentOS 7, fully distributed environment with 3 nodes. node0 is the master, and node0, node1, and node2 all also serve as DataNodes. When starting HBase, the HMaster process is missing on the master node. The HBase log errors are as follows:

```
2015-11-13 16:23:59,178 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,node0:2181,node2:2181 sessionTimeout=60000 watcher=master:600000x0, quorum=node1:2181,node0:2181,node2:2181, baseZNode=/hbase
2015-11-13 16:23:59,195 INFO  [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Opening socket connection to server node1/192.168.0.161:2181. Will not attempt to authenticate using SASL (unknown error)
2015-11-13 16:23:59,200 INFO  [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Socket connection established to node1/192.168.0.161:2181, initiating session
2015-11-13 16:23:59,211 INFO  [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Session establishment complete on server node1/192.168.0.161:2181, sessionid = 0x150feccba590005, negotiated timeout = 40000
2015-11-13 16:23:59,223 INFO  [main] zookeeper.RecoverableZooKeeper: Node /hbase already exists and this is not a retry
2015-11-13 16:23:59,245 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-11-13 16:23:59,246 INFO  [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: starting
2015-11-13 16:23:59,301 INFO  [master:namenode:60000] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-11-13 16:23:59,344 INFO  [master:namenode:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-11-13 16:23:59,346 INFO  [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2015-11-13 16:23:59,346 INFO  [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-11-13 16:23:59,355 INFO  [master:namenode:60000] http.HttpServer: Jetty bound to port 60010
2015-11-13 16:23:59,355 INFO  [master:namenode:60000] mortbay.log: jetty-6.1.26
2015-11-13 16:23:59,656 INFO  [master:namenode:60000] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
2015-11-13 16:23:59,724 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available
2015-11-13 16:23:59,725 INFO  [master:namenode:60000] master.ActiveMasterManager: Registered Active Master=namenode,60000,1447403038605
2015-11-13 16:23:59,731 INFO  [master:namenode:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-11-13 16:23:59,875 FATAL [master:namenode:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1342)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1403)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1382)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1307)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:384)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:380)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:380)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:324)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
    at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:664)
    at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:642)
    at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:599)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:481)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:154)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
    at java.lang.Thread.run(Thread.java:745)
2015-11-13 16:23:59,876 INFO  [master:namenode:60000] master.HMaster: Aborting
2015-11-13 16:23:59,877 DEBUG [master:namenode:60000] master.HMaster: Stopping service threads
2015-11-13 16:23:59,877 INFO  [master:namenode:60000] ipc.RpcServer: Stopping server on 60000
2015-11-13 16:23:59,877 INFO  [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2015-11-13 16:23:59,877 INFO  [master:namenode:60000] master.HMaster: Stopping infoServer
2015-11-13 16:23:59,877 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-11-13 16:23:59,877 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-11-13 16:23:59,879 INFO  [master:namenode:60000] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
2015-11-13 16:23:59,994 INFO  [master:namenode:60000] zookeeper.ZooKeeper: Session: 0x150feccba590005 closed
2015-11-13 16:23:59,994 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-11-13 16:23:59,994 INFO  [master:namenode:60000] master.HMaster: HMaster main thread exiting
2015-11-13 16:23:59,995 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:201)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3062)
```
The Hadoop and ZooKeeper processes are present on every node, and the HRegionServer processes are there too; only HMaster is missing on the master node. I have checked the configuration files many times over. There is also a strange issue: after every system reboot, when I start Hadoop I can see the expected processes, but http://node0:50070 cannot be reached from outside until I run `iptables -F`. Likewise, if I haven't run that command, ZooKeeper's MODE doesn't show after startup either; in other words, cluster mode fails to come up.
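Two hedged observations, offered as a sketch rather than a confirmed fix. First, a `NoSuchMethodError` on `org.apache.hadoop.fs.FSOutputSummer.<init>` while HMaster writes its version file usually means the Hadoop client jars bundled under hbase/lib do not match the Hadoop 2.6 running on the cluster; swapping in the cluster's jars is the usual remedy. Second, needing `iptables -F` after every reboot just means the firewall rules are not persisted; on CentOS 7 the supported tool is firewalld. Assuming HBASE_HOME and HADOOP_HOME are set for your install:

```sh
# 1) Compare the Hadoop version HBase bundles with what the cluster runs
ls $HBASE_HOME/lib/hadoop-common-*.jar
hadoop version

# If they differ, replace the bundled jars with the cluster's
# (back up the originals first; path pattern is an assumption)
# cp $HADOOP_HOME/share/hadoop/*/hadoop-*.jar $HBASE_HOME/lib/

# 2) Persistently open the ports instead of flushing iptables each boot
firewall-cmd --permanent --add-port=50070/tcp                        # NameNode web UI
firewall-cmd --permanent --add-port=2181/tcp                         # ZooKeeper client port
firewall-cmd --permanent --add-port=2888/tcp --add-port=3888/tcp     # ZK quorum/election
firewall-cmd --permanent --add-port=60000/tcp --add-port=60010/tcp   # HMaster RPC / web UI
firewall-cmd --reload
```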
Newbie: running a WordCount program on Hadoop fails
Environment: Ubuntu 14.04 + Hadoop 2.6.1, using VirtualBox VMs with one master and three slave nodes. Hadoop starts successfully with no problems at all. I installed Eclipse on Ubuntu and wrote a word count program in Java; the source is as follows:

```java
package wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * @author
 * @version Created: 2017-09-09 08:50:51
 */
public class Wordcount {

    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            StringTokenizer line = new StringTokenizer(value.toString());
            while (line.hasMoreTokens()) {
                word.set(line.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        protected void reduce(Text key, Iterable<IntWritable> values,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable obj : values) {
                sum += obj.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(Wordcount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hduser/demo/test.txt"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hduser/demo/wordcount"));
        //FileInputFormat.addInputPath(job, new Path(args[0]));
        //FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

After starting Hadoop, I ran this program directly from Eclipse and it succeeded: the wordcount folder was created, containing a _SUCCESS file as well as the result files. Then I wanted to package the program into a jar to run it. I first changed:

```java
FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/user/hduser/demo/test.txt"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/user/hduser/demo/wordcount"));
```

to:

```java
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
```

so that both paths are passed in from the terminal. I exported a jar with Eclipse and ran in the terminal:

```
hadoop jar wordcount.jar wordcount.Wordcount hdfs://master:9000/user/hduser/demo/test.txt hdfs://master:9000/user/hduser/demo/wordcount
```

It fails with the following output:

```
17/09/09 11:18:53 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.56.100:8050
17/09/09 11:18:54 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/09/09 11:18:55 INFO input.FileInputFormat: Total input paths to process : 1
17/09/09 11:18:55 INFO mapreduce.JobSubmitter: number of splits:1
17/09/09 11:18:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1504926710828_0001
17/09/09 11:18:56 INFO impl.YarnClientImpl: Submitted application application_1504926710828_0001
17/09/09 11:18:56 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1504926710828_0001/
17/09/09 11:18:56 INFO mapreduce.Job: Running job: job_1504926710828_0001
17/09/09 11:19:14 INFO mapreduce.Job: Job job_1504926710828_0001 running in uber mode : false
17/09/09 11:19:14 INFO mapreduce.Job: map 0% reduce 0%
17/09/09 11:19:14 INFO mapreduce.Job: Job job_1504926710828_0001 failed with state FAILED due to: Application application_1504926710828_0001 failed 2 times due to AM Container for appattempt_1504926710828_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://master:8088/proxy/application_1504926710828_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1504926710828_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
    at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
17/09/09 11:19:14 INFO mapreduce.Job: Counters: 0
```

I checked the log files:

```
2017-09-09 11:18:55,869 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-1306163227_1
2017-09-09 11:18:59,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-09 11:18:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-09-09 11:19:12,241 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.56.102:53610 Call#7 Retry#0: java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
2017-09-09 11:19:12,293 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.56.102:53610 Call#8 Retry#0: java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
2017-09-09 11:19:29,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-09 11:19:29,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.56.100
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-09 11:19:42,634 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 29
2017-09-09 11:19:42,635 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 40 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 27 SyncTimes(ms): 545
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 40 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 28 SyncTimes(ms): 613
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_inprogress_0000000000000000029 -> /usr/local/hadoop/hadoop_data/hdfs/namenode/current/edits_0000000000000000029-0000000000000000068
2017-09-09 11:19:42,704 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 69
2017-09-09 11:19:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-09-09 11:19:59,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:20:29,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-09-09 11:20:29,504 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.56.100
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 69
2017-09-09 11:20:42,759 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 24
2017-09-09 11:20:42,791 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 56
```

The one error reported in there is:

```
java.io.FileNotFoundException: File does not exist: /tmp/hadoop-yarn/staging/hduser/.staging/job_1504926710828_0001/job_1504926710828_0001_1.jhist
```

I'm a newbie and don't know what to do next.
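A hedged pointer on where to look next: the console output only says the ApplicationMaster container exited with code 1, and the NameNode log quoted above is a side effect, not the cause. The actual exception is in the YARN container logs. A minimal step, assuming log aggregation is on (otherwise use the tracking URL printed in the output):

```sh
# Pull the ApplicationMaster / container logs for the failed job;
# the real stack trace (often a missing class or bad classpath) is here
yarn logs -applicationId application_1504926710828_0001
```

When a job runs fine inside Eclipse but dies on the cluster like this, a frequent culprit is a missing or wrong `mapreduce.application.classpath` / `yarn.application.classpath` setting or a manifest problem in the exported jar, but only the container log confirms it.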
Newbie asking for help: the NodeManager processes on the Hadoop cluster slaves die by themselves
S100 is the master; S101, S102, and S103 are slaves. After `namenode -format` and `start-all.sh` I can see the NodeManager process on the slaves, but the web UI at http://S100:50070 shows no DataNode information. Checking yarn-root-nodemanager-S101.log on slave S101 turns up these errors:

```
2019-06-01 02:20:02,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:03,596 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:04,603 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:05,609 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:06,611 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:07,616 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,620 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: S100/192.168.17.100:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-06-01 02:20:08,624 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Unexpected error starting NodeStatusUpdater
java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,632 INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,638 INFO org.apache.hadoop.service.AbstractService: Service NodeManager failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,716 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
2019-06-01 02:20:08,727 INFO org.apache.hadoop.ipc.Server: Stopping server on 39863
2019-06-01 02:20:08,742 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 39863
2019-06-01 02:20:08,748 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2019-06-01 02:20:08,749 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting.
2019-06-01 02:20:08,790 INFO org.apache.hadoop.ipc.Server: Stopping server on 8040
2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8040
2019-06-01 02:20:08,796 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2019-06-01 02:20:08,836 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Public cache exiting
2019-06-01 02:20:08,840 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system...
2019-06-01 02:20:08,842 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped.
2019-06-01 02:20:08,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2019-06-01 02:20:08,843 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: java.net.ConnectException: Call From S101/127.0.0.1 to S100:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy73.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy74.registerNodeManager(Unknown Source)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:271)
    at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
    ... 6 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 18 more
2019-06-01 02:20:08,857 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at S101/127.0.0.1
************************************************************/
```

Could some expert take a look? Thanks in advance. Below is the slaves' configuration.

yarn-site.xml:

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>S100</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/disk1/nm-local-dir,/disk2/nm-local-dir</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>16384</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>16</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>S100:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>S100:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>S100:8031</value>
  </property>
</configuration>
```

core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://S100:8020/</value>
  </property>
</configuration>
```
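A hedged reading of the log rather than a confirmed diagnosis: the trace says `Call From S101/127.0.0.1 to S100:8031 ... Connection refused`. Two things stand out: S101 resolves its own hostname to 127.0.0.1 (an /etc/hosts issue), and nothing is accepting connections on S100:8031, the ResourceManager's resource-tracker port, so the ResourceManager may not be running or may be firewalled. A minimal check, run on the nodes indicated in the comments:

```sh
# On S101: the slave should resolve its own name and the master's
# to real LAN addresses, not 127.0.0.1
hostname
getent hosts S101 S100

# On S100: is the ResourceManager up and listening on 8031?
jps | grep ResourceManager
netstat -tlnp | grep 8031

# On S101: can the slave actually reach that port?
telnet S100 8031
```

If 8031 is listening on S100 but refused from S101, a firewall on S100 is the next suspect. Separately, note that the yarn-site.xml above defines `yarn.nodemanager.aux-services` twice, once with the invalid value `mapreduce.shuffle`; keeping only the `mapreduce_shuffle` entry removes the ambiguity.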
Hadoop 2.x cluster deployment: one of the DataNodes cannot start
```
Exception in secureMain
java.net.UnknownHostException: node1: node1
    at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
    at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2153)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
Caused by: java.net.UnknownHostException: node1
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
    ... 6 more
2015-01-16 09:08:54,152 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-01-16 09:08:54,164 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: node1: node1
************************************************************/
```

Environment: Ubuntu, Hadoop 2.6, JDK 7. Three VMs are deployed: one NameNode (master) and two DataNodes (node1, node2); /etc/hostname on each is set to master, node1, and node2 respectively. /etc/hosts is configured as:

```
27.0.0.1 localhost
127.0.1.1 ubuntu.localdomain ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.184.129 master
192.168.184.130 node1
192.168.184.131 node2
```

hadoop/etc/hadoop/slaves is configured as:

```
node1
node2
```

core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yangwq/hadoop-2.6.0/temp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
```

hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/yangwq/hadoop-2.6.0/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/yangwq/hadoop-2.6.0/dfs/data</value>
  </property>
</configuration>
```

mapred-site.xml:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
  </property>
</configuration>
```

yarn-site.xml:

```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- resourcemanager hostname or IP address -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>
```

At startup, the DataNode on node1 never comes up, even though SSH login to every node works normally.
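A hedged observation: `java.net.UnknownHostException: node1: node1` is thrown while the DataNode looks up its own hostname, so the problem is local name resolution on node1 rather than the Hadoop configuration. Note the quoted /etc/hosts begins with `27.0.0.1 localhost`; if that is really what the file on node1 contains (and not a copy-paste slip for 127.0.0.1), it deserves a close look. A quick check on node1:

```sh
# What does this machine think it is called?
hostname

# Can that exact name be resolved locally?
getent hosts "$(hostname)" || echo "local hostname does not resolve"
ping -c 1 node1
```

If resolution fails, fixing node1's own /etc/hosts (a correct `127.0.0.1 localhost` plus the `192.168.184.130 node1` line) and restarting the DataNode is the usual fix.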
Help: where is HBase's bundled ZooKeeper started and configured?
I am trying to set up HBase in standalone mode, so I'm using the bundled ZooKeeper. Hadoop is configured and working, but start-hbase.cmd always fails. The error output looks like this:

```
2017-12-06 16:38:11,275 INFO [M:0;WIN-52F19LCA43O:55968] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2017-12-06 16:38:18,010 INFO [SessionTracker] server.ZooKeeperServer: Expiring session 0x1602af5c9830003, timeout of 10000ms exceeded
2017-12-06 16:38:18,010 INFO [SessionTracker] server.ZooKeeperServer: Expiring session 0x1602af5c9830001, timeout of 10000ms exceeded
2017-12-06 16:38:18,010 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1602af5c9830003
2017-12-06 16:38:18,010 INFO [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1602af5c9830001
2017-12-06 16:38:40,606 FATAL [WIN-52F19LCA43O:55968.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: Mkdirs failed to create file:/localhost:9000/.tmp (exists=false, cwd=file:/F:/360Downloads/hbase-1.2.3/bin)
```

The processes before starting:

```
F:\360Downloads\hbase-1.2.3\bin>jps
11232 NameNode
12592 Jps
9412 DataNode

F:\360Downloads\hbase-1.2.3\bin>start-hbase.cmd
```

It looks like ZooKeeper has never actually connected and the ClusterId stays null. Could someone please take a look?
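A hedged pointer: `Mkdirs failed to create file:/localhost:9000/.tmp` suggests hbase.rootdir was set to something like `localhost:9000/...` without a URI scheme, so HBase fell back to the local file:// filesystem and mangled the path. A minimal hbase-site.xml sketch for standalone HBase backed by the local HDFS (the port 9000 is taken from your error message; adjust to your fs.defaultFS):

```xml
<configuration>
  <!-- Full URI including the hdfs:// scheme; without it HBase
       interprets the value against the local filesystem -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <!-- Standalone mode: HBase runs its own bundled ZooKeeper -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
</configuration>
```

As for where the bundled ZooKeeper is started: it is controlled by the HBASE_MANAGES_ZK setting in hbase-env (true by default), so in standalone mode there is no separate ZooKeeper start script; HBase launches it itself.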
HDFS upload/download errors
Upload reports no error, but the file on HDFS has size 0. On download I get:

```
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-127181180-172.17.0.2-1526283881280:blk_1073741825_1001 file=/wing/LICENSE.txt
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:946)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:604)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2030)
    at hadoop.hdfs.HdfsTest.testDownLoad(HdfsTest.java:76)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
    at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
```

Could the experts help me take a look? The environment is deployed on Docker, one master and two slaves. jps shows both the DataNode and NameNode processes are alive; the web UI on port 50070 is reachable and shows the nodes as live too. On the server itself, `hdfs dfs -put`/`-get` operations succeed. The Java code is as follows:

```java
private FileSystem createClient() throws IOException {
    // Run as the root user
    System.setProperty("HADOOP_USER_NAME", "root");
    // Point at the NameNode
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://Master:9000");
    // Create the HDFS client
    return FileSystem.get(conf);
}

@Test
public void testMkdir() throws IOException {
    FileSystem client = createClient();
    // Create a directory
    client.mkdirs(new Path(PATH));
    // Close the client
    client.close();
}

@Test
public void testUpload() throws Exception {
    FileSystem client = createClient();
    // Input stream from the local file
    InputStream in = new FileInputStream("d:\\bcprov-jdk16-1.46.jar");
    // Output stream to HDFS
    OutputStream out = client.create(new Path(PATH + "/bcprov.jar"));
    // Upload
    IOUtils.copyBytes(in, out, 1024);
    // Close the client
    client.close();
}

@Test
public void testDownLoad() throws Exception {
    FileSystem client = createClient();
    // Input stream from HDFS
    FSDataInputStream in = client.open(new Path("hdfs://Master:9000/wing/LICENSE.txt"));
    // Output stream to the local file
    OutputStream out = new FileOutputStream("d:\\hello.txt");
    // Download
    IOUtils.copyBytes(in, System.out, 1024, false);
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    // Close the client
    client.close();
}
```

Other API calls work fine, for example creating directories.
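A hedged hypothesis that fits these symptoms exactly (works on the server, fails from an outside client, zero-byte uploads, BlockMissingException on reads): with Hadoop inside Docker, the NameNode hands the client the DataNodes' internal container IPs, which the outside client cannot reach, so block transfers silently fail while pure-metadata calls like mkdirs still succeed. One common workaround is to make the client connect to DataNodes by hostname and map those hostnames to reachable addresses on the client machine. A sketch of the client-side part (the property name is real; whether it resolves your case depends on your Docker networking):

```java
private FileSystem createClient() throws IOException {
    System.setProperty("HADOOP_USER_NAME", "root");
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://Master:9000");
    // Ask the NameNode to return DataNode hostnames instead of
    // (unreachable) container IPs; the client then resolves the
    // hostnames itself, e.g. via entries in its own hosts file
    conf.set("dfs.client.use.datanode.hostname", "true");
    return FileSystem.get(conf);
}
```

The DataNode hostnames and their data-transfer ports (50010/50075 on Hadoop 2.x) still have to be reachable from the client, e.g. via `-p` port mappings on the containers plus hosts-file entries on the client machine.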