Hadoop cluster: start-yarn.sh fails, probably a JDK issue, please advise

The JDK was originally installed at /usr/local/jdk1.7.0_80. I later created a new folder named jdk and moved jdk1.7.0_80 into it, so the JDK now lives at /usr/local/jdk/jdk1.7.0_80.

I updated JAVA_HOME in /etc/profile and ran source on it as well. [screenshot]

which java also points to the right place.

[screenshot]

I also updated JAVA_HOME in hadoop-env.sh.

[screenshot]

Running start-dfs.sh first works fine.

[screenshot]

Then start-yarn.sh fails.

[screenshot]

Please advise.

1 answer

Problem solved: yarn-env.sh still had the old JAVA_HOME configured in it; after updating that as well, everything starts fine.
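For anyone who lands here with the same symptom: the daemons start over ssh in a non-login shell, so a stale JAVA_HOME hard-coded in any of the *-env.sh scripts silently overrides the value in /etc/profile. A quick audit, as a sketch assuming a Hadoop 2.x layout and the paths from the question:

```sh
# list every hard-coded JAVA_HOME in the Hadoop env scripts:
grep -n "JAVA_HOME" $HADOOP_HOME/etc/hadoop/*-env.sh

# then point each hit (hadoop-env.sh, yarn-env.sh, ...) at the new JDK:
export JAVA_HOME=/usr/local/jdk/jdk1.7.0_80
```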

Other related questions
New to Hadoop: a start-all.sh problem

Hadoop version: hadoop-2.6.5. Environment: ![screenshot](https://img-ask.csdn.net/upload/201711/08/1510104898_307684.png)

HADOOP_HOME also differs between machines; each install sits under its own user's home directory, e.g.:

```
/home/yann/hadoop
/home/ubuntu01/hadoop
/home/ubuntu02/hadoop
```

When I start the cluster with start-all.sh, it prompts me with the following error:

```
yann@yann-laptop:~/develop/tool/hadoop-2.6.5/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [yann-laptop]
yann-laptop: namenode running as process 7041. Stop it first.
yann@ubuntu01-virtual-machine's password:
The authenticity of host 'ubuntu02-virtual-machine (192.168.2.182)' can't be established.
ECDSA key fingerprint is 57:ee:92:a8:85:85:ef:16:26:a3:b7:1d:54:77:19:18.
Are you sure you want to continue connecting (yes/no)?
```

hadoop/etc/hadoop/slaves contains:

```
ubuntu01-virtual-machine
ubuntu02-virtual-machine
```

The SSH public keys are already configured:

```
ssh-rsa XXXXX......XXXXXX yann@yann-laptop
ssh-rsa XXXXX......XXXXXX ubuntu01@ubuntu01-virtual-machine
ssh-rsa XXXXX......XXXXXX ubuntu02@ubuntu02-virtual-machine
```

Judging from the prompts, it used the user yann to connect to ubuntu01-virtual-machine. How should this be configured when master and slaves use different usernames?
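A common fix for mismatched usernames (a sketch built from the hostnames above) is a per-host User entry in ~/.ssh/config on the master, so that the bare `ssh <hostname>` issued by slaves.sh logs in as the right account and the existing keys get used:

```
# ~/.ssh/config on yann-laptop
Host ubuntu01-virtual-machine
    User ubuntu01
Host ubuntu02-virtual-machine
    User ubuntu02
```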

Hadoop: start-dfs.sh command not found on startup

```
[root@sparkproject1 sbin]# start-dfs.sh
-bash: start-dfs.sh: command not found
```

hadoop-env.sh already has JAVA_HOME configured, and `hadoop version` prints the version number fine.
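Since the prompt is already sitting inside sbin, the shell simply is not searching the current directory; call the script by an explicit path, or put sbin on PATH for good (a sketch, assuming HADOOP_HOME is exported the way a working `hadoop version` suggests):

```sh
./start-dfs.sh                       # explicit path works from inside sbin
# or permanently, e.g. at the end of /etc/profile:
export PATH=$PATH:$HADOOP_HOME/sbin
source /etc/profile
```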

Hadoop start-all.sh problem

Hadoop version: hadoop-2.6.5. Environment: ![screenshot](https://img-ask.csdn.net/upload/201711/08/1510127732_32215.png)

HADOOP_HOME also differs between machines; each install sits under its own user's home directory, e.g.:

```
/home/yann/hadoop
/home/ubuntu01/hadoop
/home/ubuntu02/hadoop
```

When I start the cluster with start-all.sh, I get:

```
yann@yann-laptop:~/develop/tool/hadoop-2.6.5/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [yann-laptop]
yann-laptop: namenode running as process 7041. Stop it first.
yann@ubuntu01-virtual-machine's password:
The authenticity of host 'ubuntu02-virtual-machine (192.168.2.182)' can't be established.
ECDSA key fingerprint is 57:ee:92:a8:85:85:ef:16:26:a3:b7:1d:54:77:19:18.
Are you sure you want to continue connecting (yes/no)?
```

hadoop/etc/hadoop/slaves contains:

```
ubuntu01-virtual-machine
ubuntu02-virtual-machine
```

The SSH public keys are already configured:

```
ssh-rsa XXXXX......XXXXXX yann@yann-laptop
ssh-rsa XXXXX......XXXXXX ubuntu01@ubuntu01-virtual-machine
ssh-rsa XXXXX......XXXXXX ubuntu02@ubuntu02-virtual-machine
```

Judging from the prompts, it used yann to connect to ubuntu01-virtual-machine. How should this be configured, or is such a layout discouraged?

(1) Different usernames on master and slaves: when I changed the slaves file to ubuntu01@ubuntu01-virtual-machine, that problem went away, but then it went looking for /home/yann/hadoop on the ubuntu01 machine, which of course does not exist, so it reported the directory missing; that leads to the second point.

(2) Different Hadoop install directories: is this workable at all? (Most people seem to install Hadoop at the same path, e.g. under /usr/, on both master and slaves.)
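On point (2): hadoop-daemons.sh runs `cd "$HADOOP_PREFIX"` with the master's value on every slave before launching the daemon, so the 2.x scripts effectively require the same install path cluster-wide. One way to get there, as a sketch (the target path and ownership are assumptions):

```sh
# on each slave, create a common location owned by that node's user:
sudo mkdir -p /usr/local/hadoop && sudo chown ubuntu01: /usr/local/hadoop

# then mirror the master's install into it, run from the master:
rsync -a /home/yann/hadoop/ ubuntu01@ubuntu01-virtual-machine:/usr/local/hadoop/
```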

Hadoop cluster setup: start-all.sh reports the following error

![screenshot](https://img-ask.csdn.net/upload/201512/28/1451311731_549842.png) From a quick look it should be a problem in the start-dfs.sh script, but there is no absolute path configured in it...

After installing Spark on a Mac, running ./start-all.sh prints the following. How do I fix this?

```
./start-all.sh: line 29: /usr/local/Cellar/spark/2.1.0/bin:/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/bin:/Developer/NVIDIA/CUDA-8.0/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin/sbin/spark-config.sh: No such file or directory
./start-all.sh: line 32: /usr/local/Cellar/spark/2.1.0/bin:/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/bin:/Developer/NVIDIA/CUDA-8.0/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin/sbin/start-master.sh: No such file or directory
./start-all.sh: line 35: /usr/local/Cellar/spark/2.1.0/bin:/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/bin:/Developer/NVIDIA/CUDA-8.0/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin/sbin/start-slaves.sh: No such file or directory
```
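The give-away is that each missing path begins with the value of $PATH: SPARK_HOME has evidently been set to the PATH string (e.g. a mangled export in a shell profile), and start-all.sh then appends /sbin/spark-config.sh to it. Re-pointing SPARK_HOME at the real install root should fix it; a sketch, assuming the Homebrew layout implied by the errors (Homebrew usually keeps the actual distribution under libexec):

```sh
export SPARK_HOME=/usr/local/Cellar/spark/2.1.0/libexec   # assumed location
"$SPARK_HOME/sbin/start-all.sh"
```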

Running start-dfs.sh reports: master: ERROR: JAVA_HOME is not set and could not be found.

I have already set the absolute path of JAVA_HOME in hadoop-env.sh, but it still fails:

```
###
# Generic settings for HADOOP
###

# Technically, the only required environment variable is JAVA_HOME.
# All others are optional. However, the defaults are probably not
# preferred. Many sites configure these options outside of Hadoop,
# such as in /etc/profile.d

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_211
```

My VM is VMware Workstation 14.0.0, the Linux is Ubuntu 12.04 desktop amd64, the JDK is jdk-1.8.0_211, and Hadoop is 3.1.2.
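The `master:` prefix on the error matters: the check fails on that node, inside a non-interactive ssh session that reads neither /etc/profile nor .bashrc, so the export has to be present in etc/hadoop/hadoop-env.sh on every node, not only the one that was edited. A quick remote check (a sketch; the Hadoop install path is an assumption):

```sh
# verify what each node's hadoop-env.sh actually contains:
ssh master 'grep -n "^export JAVA_HOME" /usr/local/hadoop-3.1.2/etc/hadoop/hadoop-env.sh'
# and that the JDK really exists at that path on that node:
ssh master 'ls /usr/lib/jvm/jdk1.8.0_211/bin/java'
```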

Problem starting DFS in Hadoop

I have only just started with Hadoop. After formatting the NameNode, I started it with `sudo sbin/start-dfs.sh` and got this error:

```
hadoop@qiaoyu-Lenovo-G460:/usr/local/hadoop-2.4.1$ sudo sbin/start-dfs.sh
[sudo] password for hadoop:
Starting namenodes on [localhost]
root@localhost's password:
localhost: Permission denied, please try again.
```

I searched for a long time; the most common advice was to change the password with sudo passwd, but after trying that the same thing still happens. I am stuck and need to move on, please help.
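The `root@localhost's password:` prompt is the clue: running the script under sudo makes it ssh to localhost as root. The usual cure is to drop sudo entirely and run as the hadoop user with passwordless ssh to itself (a sketch):

```sh
# as the hadoop user, one time only:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# then, still without sudo:
sbin/start-dfs.sh
```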

Diagnostics: Exception from container-launch, and the NameNode shuts down automatically after the error

```
Application Attempt State: FAILED
AM Container: container_1548512874258_0001_02_000001
Node: N/A
Tracking URL: History
Diagnostics Info:
AM Container for appattempt_1548512874258_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:
http://hadoop101:8088/cluster/app/application_1548512874258_0001
Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1548512874258_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt
Blacklisted Nodes: -
```
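Exit code 1 by itself says nothing; the AM container's own stdout/stderr do. If log aggregation is enabled they can be pulled with the application id from the diagnostics above, otherwise they sit under the NodeManager's log directory:

```sh
yarn logs -applicationId application_1548512874258_0001
```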

Starting YARN: ResourceManager and NodeManager cannot find the main class, and /bin/yarn cannot be found

start-dfs.sh starts fine; start-yarn.sh fails. It cannot find or load the main class, and it cannot find /bin/yarn. Here are the logs: ![screenshot](https://img-ask.csdn.net/upload/202005/10/1589120050_430441.png) ![screenshot](https://img-ask.csdn.net/upload/202005/10/1589120075_885132.png)
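A path of literally `/bin/yarn` usually means the variable in front of it expanded to empty. Worth checking in the same shell that runs start-yarn.sh (a sketch):

```sh
echo "HADOOP_HOME=$HADOOP_HOME"
echo "HADOOP_YARN_HOME=$HADOOP_YARN_HOME"
ls "$HADOOP_HOME/bin/yarn"    # should exist and be executable
```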

Looking for the hadoop-eclipse-plugin-3.2.1.jar plugin

Does anyone have the hadoop-eclipse-plugin-3.2.1.jar plugin?

Some errors when running wordcount on a Hadoop cluster

I am a Hadoop beginner, please help me out, many thanks! I built a Hadoop cluster with docker inside a virtual machine; the docker image is ubuntu 18.04. First, my hadoop1 master node has these services running:

```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```

And the two slave nodes:

```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```

```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```

On hadoop1 (the master) I create a /data/input directory in HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```

That is the pile of warnings, and every bin/hdfs dfs command I run from here on prints the same thing. Does this warning affect the cluster at all, and how can it be removed?

The same warnings appear when pushing the test1 file to HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
(same five WARNING lines as above)
```

and when listing the uploaded file:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
(same five WARNING lines as above)
Found 1 items
-rw-r--r--   1 root supergroup         60 2019-09-15 08:07 /data/input/test1
```

and when running share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:

```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
(same five WARNING lines as above)
```

and when reading the wordcount result afterwards:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
(same five WARNING lines as above)
first 1
hello 2
is 2
my 2
test1 1
testwordcount 1
this 2
```

Could one of the experts take a look at how to solve this? Many thanks!
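Those warnings are the JVM, not Hadoop, complaining: Hadoop 2.9.2 predates the Java 9 module system, and KerberosUtil's reflective access trips the new checks. They are harmless for the cluster, and the simplest way to silence them is to run Hadoop on a JDK 8 (a sketch; the JDK path is an assumption):

```sh
# in etc/hadoop/hadoop-env.sh on every node:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```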

Hadoop: errors when starting HDFS and YARN together

```
liuye@liuye-VirtualBox:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
/usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
Starting namenodes on []
liuye@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-liuye-namenode-liuye-VirtualBox.out
localhost: /usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
liuye@localhost's password:
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-liuye-datanode-liuye-VirtualBox.out
localhost: /usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
/usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-liuye-resourcemanager-liuye-VirtualBox.out
/usr/local/hadoop/bin/yarn: line 284: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
liuye@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-liuye-nodemanager-liuye-VirtualBox.out
localhost: /usr/local/hadoop/bin/yarn: line 284: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
liuye@liuye-VirtualBox:~$
```
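Every failure names the same non-existent path, so this is a stale JAVA_HOME in hadoop-env.sh pointing at a removed java-7-openjdk-amd64. Find where java really lives and repoint it (a sketch; the resulting path is an assumption):

```sh
readlink -f "$(which java)"
# if that prints e.g. /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java,
# set in /usr/local/hadoop/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```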

Where is spring-hadoop.xsd?

Urgent: where can I find spring-hadoop.xsd?
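The schema is not a separate download; it ships inside the spring-data-hadoop jar (alongside a META-INF/spring.schemas mapping), so the XML parser resolves it from the classpath. A way to confirm (a sketch; the jar version is an assumption):

```sh
unzip -l spring-data-hadoop-2.5.0.RELEASE.jar | grep -i 'spring-hadoop.*\.xsd'
```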

Hadoop 3.0: running yarn jar 3.0.0-alpha2.jar pi 10 100

```
2017-05-17 19:07:12,789 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-brody/nm-local-dir/nmPrivate/container_1495017112106_0012_01_000001.tokens
2017-05-17 19:07:12,790 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user brody
2017-05-17 19:07:12,793 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-brody/nm-local-dir/nmPrivate/container_1495017112106_0012_01_000001.tokens to /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012/container_1495017112106_0012_01_000001.tokens
2017-05-17 19:07:12,794 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012 = file:/tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012
2017-05-17 19:07:12,843 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2017-05-17 19:07:13,178 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1495017112106_0012_01_000001 transitioned from LOCALIZING to SCHEDULED
2017-05-17 19:07:13,178 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler: Starting container [container_1495017112106_0012_01_000001]
2017-05-17 19:07:13,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1495017112106_0012_01_000001 transitioned from SCHEDULED to RUNNING
2017-05-17 19:07:13,352 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1495017112106_0012_01_000001
2017-05-17 19:07:13,359 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012/container_1495017112106_0012_01_000001/default_container_executor.sh]
2017-05-17 19:07:13,686 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1495017112106_0012_01_000001 is : 1
2017-05-17 19:07:13,686 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1495017112106_0012_01_000001 and exit code: 1
ExitCodeException exitCode=1:
2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_1495017112106_0012_02_000001 from application application_1495017112106_0012
2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1495017112106_0012_02_000001
2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1495017112106_0012
2017-05-17 19:07:16,405 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1495017112106_0012_02_000001]
2017-05-17 19:07:16,408 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1495017112106_0012 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1495017112106_0012
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1495017112106_0012 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1495017112106_0012, with delay of 10800 seconds
2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012
```
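The NodeManager log above only records exit code 1; the launch script's own stderr is kept per container under the NodeManager's log directory and usually names the real cause (a sketch; the log root is an assumption based on the defaults):

```sh
cat $HADOOP_HOME/logs/userlogs/application_1495017112106_0012/container_1495017112106_0012_01_000001/stderr
```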

A puzzling Hadoop start-up script

**While tracing what the start-dfs.sh script does at startup, I found something extremely puzzling, as follows:**

---

Hadoop version: Apache Hadoop 2.6.5

---

**1. start-dfs.sh**

start-dfs.sh contains this statement:

```
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt
```

which corresponds to running

```
hadoop-daemons.sh --config "$HADOOP_CONF_DIR" --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode $nameStartOpt
```

so I followed it into hadoop-daemons.sh.

---

**2. hadoop-daemons.sh**

```
exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd "$HADOOP_PREFIX" \; "$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"
```

Execution continues here and finally runs

```
"$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"
```

I understood the "$@" here to be the whole argument string passed over from start-dfs.sh:

```
--config "$HADOOP_CONF_DIR" --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode $nameStartOpt
```

Tracing on, execution reaches the hadoop-daemon.sh script.

---

**3. hadoop-daemon.sh**

```
hadoopScript="$HADOOP_PREFIX"/bin/hadoop
if [ "--script" = "$1" ]
then
  shift
  hadoopScript=$1
  shift
fi
startStop=$1
shift
command=$1
shift
```

No shift has been performed anywhere before this. So what is the comparison

```
$1="--script"
```

supposed to mean here? Shouldn't the arguments passed into this script from hadoop-daemons.sh be

```
--config "$HADOOP_CONF_DIR" --config "$HADOOP_CONF_DIR" --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode $nameStartOpt
```

? Why does the processing above act as if only

```
--script "$bin/hdfs" start namenode $nameStartOpt
```

had been passed in? I really cannot figure it out.
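From reading the same 2.6.x scripts, the missing piece is at their very top: hadoop-daemons.sh and hadoop-daemon.sh both source libexec/hadoop-config.sh before their own bodies run, and that sourced script consumes --config (and --hostnames) with shift. A sourced file shares the caller's positional parameters, so hadoop-daemons.sh's "$@" has already lost its leading options by the time the exec line builds the hadoop-daemon.sh command, and hadoop-daemon.sh's own sourced hadoop-config.sh strips the fresh --config pair again; when the `if [ "--script" = "$1" ]` test finally runs, $1 really is --script. A self-contained sketch of the mechanism (file name and arguments invented for the demo):

```sh
#!/bin/bash
# A *sourced* script shares the caller's "$@", so its shifts
# permanently consume the caller's arguments; this is the same
# pattern libexec/hadoop-config.sh uses.
cat > /tmp/fake-config.sh <<'EOF'
if [ "--config" = "$1" ]; then shift; confdir="$1"; shift; fi
if [ "--hostnames" = "$1" ]; then shift; hostnames="$1"; shift; fi
EOF

set -- --config /etc/hadoop/conf --hostnames master --script bin/hdfs start namenode
. /tmp/fake-config.sh
echo "\$1 is now: $1"        # prints: --script
echo "remaining: $*"         # prints: --script bin/hdfs start namenode
```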

Installing Hadoop on Windows: startup fails with No such file or directory

I have spent the last few days wrestling with installing Hadoop on Windows, following the standard steps from the web exactly. Reference post: http://www.cnblogs.com/kinglau/p/3270160.html. Having finally reached the last step, starting Hadoop keeps failing with the error in the title. The HDFS format log:

```
$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/07/13 23:07:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = 58-PC/192.168.0.102
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.0
STARTUP_MSG:   classpath = D:\tools\cygwin32\home\lenovo\hadoop\etc\hadoop;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\activation-1.1.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\api-util-1.0.0-M20.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\asm-3.2.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\avro-1.7.4.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-cli-1.2.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-codec-1.4.jar;D:\tools\cygwin32\home\lenovo\had ...
STARTUP_MSG:   java = 1.8.0_31
************************************************************/
15/07/13 23:07:53 INFO namenode.NameNode: createNameNode [-format]
15/07/13 23:07:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-052de37d-497f-4dd3-80bc-6c6c8a26d5d0
15/07/13 23:07:55 INFO namenode.FSNamesystem: No KeyProvider found.
15/07/13 23:07:55 INFO namenode.FSNamesystem: fsLock is fair:true
15/07/13 23:07:56 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/07/13 23:07:56 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/07/13 23:07:56 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/07/13 23:07:56 INFO blockmanagement.BlockManager: The block deletion will start around 2015 ▒▒▒▒ 13 23:07:56
15/07/13 23:07:56 INFO util.GSet: Computing capacity for map BlocksMap
15/07/13 23:07:56 INFO util.GSet: VM type = 32-bit
15/07/13 23:07:56 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/07/13 23:07:56 INFO util.GSet: capacity = 2^22 = 4194304 entries
15/07/13 23:07:56 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/07/13 23:07:56 INFO blockmanagement.BlockManager: defaultReplication = 1
15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxReplication = 512
15/07/13 23:07:56 INFO blockmanagement.BlockManager: minReplication = 1
15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/07/13 23:07:56 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/07/13 23:07:56 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/07/13 23:07:56 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/07/13 23:07:56 INFO namenode.FSNamesystem: fsOwner = lenovo (auth:SIMPLE)
15/07/13 23:07:56 INFO namenode.FSNamesystem: supergroup = supergroup
15/07/13 23:07:56 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/07/13 23:07:56 INFO namenode.FSNamesystem: HA Enabled: false
15/07/13 23:07:56 INFO namenode.FSNamesystem: Append Enabled: true
15/07/13 23:07:56 INFO util.GSet: Computing capacity for map INodeMap
```

Hadoop pseudo-distributed mode: starting the daemons reports JAVA_HOME is not set

![screenshot](https://img-ask.csdn.net/upload/201708/14/1502641191_462166.png) This is what my hadoop-env.sh looks like when opened (latest Hadoop). I appended my JAVA_PATH directly after the =, but after saving and running start-all.sh it still fails. Please advise on how to fix this! Also, I am using PuTTY; running start-all.sh there errors too, and typing start-all.sh in the CentOS terminal says the command does not exist at all????

Version conflict between Hadoop 2.4.0 and HBase-0.96-hadoop2

My Hadoop environment is Hadoop 2.4.0 and the HBase is HBase-0.96.2-hadoop2. Today I wrote a program against the HBase API, and running it throws the error below:

```
2014-09-01 18:16:00,247 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-09-01 18:16:00,283 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1514)
    at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:113)
    at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:265)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:134)
    at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1710)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:806)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366)
    at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247)
    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:183)
    at cn.haha.HBase.HBaseApp1.main(HBaseApp1.java:26)
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=Admin-PC
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.7.0_65
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Oracle Corporation
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=C:\workDir\jdk7u65\jre
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client
environment:java.class.path=E:\workDir\workspace_eclipse\HBase-0.96\bin;E:\workDir\workspace_eclipse\HBase-0.96\lib\activation-1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\aopalliance-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\asm-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\avro-1.7.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-1.7.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-core-1.8.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-cli-1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-codec-1.7.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-collections-3.2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-compress-1.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-configuration-1.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-daemon-1.0.13.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-digester-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-el-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-httpclient-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-io-2.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-lang-2.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-logging-1.1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-math-2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-net-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\findbugs-annotations-1.3.9-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\gmbal-api-only-3.0.0-b023.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-framework-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-server-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-servlet-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-rcm-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guava-12.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-servlet-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-annotations-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-auth-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-app-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-core-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-shuffle-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-api-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-nodemanager-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hamcrest-core-1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-client-0.96.2-hadoop2.jar
;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-examples-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop2-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-prefix-tree-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-protocol-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-shell-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-testing-util-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-thrift-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\htrace-core-2.04.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpclient-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpcore-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-core-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-jaxrs-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-mapper-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-xc-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jamon-runtime-2.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-compiler-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-runtime-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.inject-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-api-3.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-api-2.2.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-impl-2.2.3-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-client-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-core-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-guice-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-json-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-server-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-core-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jets3t-0.6.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jettison-1.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-sslengine-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-util-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jruby-complete-1.6.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsch-0.1.42.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-api-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsr305-1.3.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\junit-4.11.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\libthrift-0.9.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\log4j-1.2.17.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\management-api-3.
0.0-b012.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\metrics-core-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\netty-3.6.6.Final.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\paranamer-2.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\protobuf-java-2.5.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\servlet-api-2.5-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-api-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-log4j12-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\snappy-java-1.0.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xmlenc-0.52.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xz-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\zookeeper-3.4.5.jar 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.library.path=C:\workDir\jdk7u65\bin;C:\windows\Sun\Java\bin;C:\windows\system32;C:\windows;C:/workDir/jdk7u65/bin/../jre/bin/client;C:/workDir/jdk7u65/bin/../jre/bin;C:/workDir/jdk7u65/bin/../jre/lib/i386;C:\Program Files (x86)\Common Files\NetSarang;C:\workDir\jdk7u65\bin;E:\workDir\apache-tomcat-7.0.55;E:\workDir\apache-tomcat-7.0.55;%CATALINA_HOME%\common\lib\common\lib\bin;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\Lenovo\Fingerprint Manager Pro\;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x64;C:\Program Files (x86)\IDM Computer Solutions\UltraEdit\;E:\workDir\eclipse-indigo-3.7.2;;. 
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\ 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA> 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows 7 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=x86 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=6.1 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Users\Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=E:\workDir\workspace_eclipse\HBase-0.96 2014-09-01 18:16:00,299 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,326 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,328 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,329 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,335 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001c, negotiated timeout = 40000 2014-09-01 18:16:00,472 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,473 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. 
Will not attempt to authenticate using SASL (unknown error)
2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session
2014-09-01 18:16:00,478 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001d, negotiated timeout = 40000
2014-09-01 18:16:00,499 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-09-01 18:16:00,817 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001d closed
2014-09-01 18:16:00,817 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-09-01 18:16:01,288 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase
2014-09-01 18:16:01,290 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181
2014-09-01 18:16:01,290 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error)
2014-09-01 18:16:01,291 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session
2014-09-01 18:16:01,294 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001e, negotiated timeout = 40000
2014-09-01 18:16:01,304 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001e closed
2014-09-01 18:16:01,304 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
```
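The actual failure in that log is the winutils lookup ("Could not locate executable null\bin\winutils.exe"): on Windows the Hadoop client wants HADOOP_HOME pointing at a directory whose bin contains winutils.exe for the matching Hadoop version. A sketch (paths are assumptions):

```
:: Windows cmd, before launching the program:
set HADOOP_HOME=C:\workDir\hadoop-2.4.0
set PATH=%PATH%;%HADOOP_HOME%\bin
:: winutils.exe for your Hadoop version must sit in %HADOOP_HOME%\bin
```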

HBase startup error, looking for a solution

When the HBase cluster is started via ssh, it throws the error below:

```
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2706)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2721)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2704)
    ... 5 more
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1003)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:579)
```

Starting through a custom script or bin/start-hbase.sh fails like this, but running

```
$ bin/hbase-daemon.sh start master
$ bin/hbase-daemon.sh start regionserver
```

individually on each machine does not. Why? Please help!
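"No FileSystem for scheme: hdfs" only when launched through ssh points at the environment: a non-interactive ssh shell sources neither /etc/profile nor .bashrc, so the Hadoop jars and configuration drop off the classpath. Putting the settings into conf/hbase-env.sh, which the daemon scripts source themselves, usually fixes it; a sketch, with the paths as assumptions:

```sh
# conf/hbase-env.sh on every node:
export JAVA_HOME=/usr/local/jdk1.8.0_171
export HADOOP_HOME=/usr/local/hadoop
export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop
```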
