[Hadoop 3.0.0] Could not find YarnChild (20C bounty)

Could not find or load main class org.apache.hadoop.mapred.YarnChild

I'm running Hadoop 3.0.0. Everything is configured and the daemons all start up fine, but running hadoop-mapreduce-examples-3.0.0.jar fails with the error above (running my own programs fails the same way).

Here is my configuration:

<!-- core-site.xml --> 
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
</property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hadoopdata</value>
</property>

<!-- hdfs-site.xml -->
<property>
        <name>dfs.replication</name>
        <value>3</value>
</property>

<!-- yarn-site.xml  -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
</property>

<!-- mapred-site.xml -->
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
<property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
</property>

export HADOOP_HOME=/data/hadoop-3.0.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

HADOOP_CLASSPATH is not set.
I've searched Baidu, Google, and Stack Overflow for quite a while without finding an answer; hoping someone can help!
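
For readers who hit the same error: on Hadoop 3.x it usually means the MapReduce jars are not on the classpath of the containers YARN launches, because HADOOP_MAPRED_HOME never resolves to a real path inside the container. A minimal sketch of the commonly suggested mapred-site.xml settings, using the literal install path from this post (/data/hadoop-3.0.0) rather than $HADOOP_COMMON_HOME, which may be undefined in the container environment:

<!-- mapred-site.xml: make the MapReduce install visible to the AM and the tasks -->
<property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
<property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>
<property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.0.0</value>
</property>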

Comment from baidu_26773887 (7 months ago): Did you ever manage to solve this?

2 answers

Yeauty's answer: Is YARN even running? Did it start but never register? Or did it error out and die partway through startup?

Yeauty (reply, almost 2 years ago): It's not the same error; the clocks are synchronized and the time zones match too.
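
A quick way to check what this answer is suggesting, assuming a standard install with the Hadoop bin directory on the PATH:

# on the master: is the ResourceManager up, and which NodeManagers registered with it?
jps                # should list ResourceManager
yarn node -list    # should list one RUNNING NodeManager per worker

# on each worker: is the NodeManager process alive?
jps                # should list NodeManager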

Other related questions
Hadoop 2.2: error running wordcount
Hadoop 2.2 + JDK 1.7, running the wordcount example with hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /word /ws fails with: org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1449733659077_0001_m_000000_0: Error: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.InputSplit. Any pointers would be appreciated.
Does Hadoop 3.0.0 not generate a _SUCCESS file?
As the title says: does Hadoop 3.0.0 no longer generate the _SUCCESS marker file?
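For what it's worth, the _SUCCESS marker is written by FileOutputCommitter and is controlled by a job property that defaults to true, so one thing to rule out is that it has been switched off somewhere in the cluster or job configuration. A minimal sketch:

<!-- mapred-site.xml or per-job configuration -->
<property>
        <name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name>
        <value>true</value>
</property>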
Nutch 2.3 + Hadoop 2.4 compatibility problem
masterbak:9000/user/url/urls.txt:0+22 2015-02-05 01:14:43,418 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected at org.apache.gora.mapreduce.GoraOutputFormat.getRecordWriter(GoraOutputFormat.java:83) at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Running a jar fails: Hadoop java.io.IOException
Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :interface javax.xml.soap.Text. Running a simple jar: hadoop jar Hadoop_Demo1.jar /user/myData/ /user/out/ 17/03/15 02:52:37 INFO client.RMProxy: Connecting to ResourceManager at s0/192.168.253.130:8032 17/03/15 02:52:37 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 17/03/15 02:52:38 INFO input.FileInputFormat: Total input paths to process : 2 17/03/15 02:52:38 INFO mapreduce.JobSubmitter: number of splits:2 17/03/15 02:52:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489512856623_0004 17/03/15 02:52:39 INFO impl.YarnClientImpl: Submitted application application_1489512856623_0004 17/03/15 02:52:39 INFO mapreduce.Job: The url to track the job: http://s0:8088/proxy/application_1489512856623_0004/ 17/03/15 02:52:39 INFO mapreduce.Job: Running job: job_1489512856623_0004 17/03/15 02:52:50 INFO mapreduce.Job: Job job_1489512856623_0004 running in uber mode : false 17/03/15 02:52:50 INFO mapreduce.Job: map 0% reduce 0% 17/03/15 02:55:18 INFO mapreduce.Job: map 50% reduce 0% 17/03/15 02:55:18 INFO mapreduce.Job: Task Id : attempt_1489512856623_0004_m_000001_0, Status : FAILED Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :interface javax.xml.soap.Text at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414) at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.ClassCastException: interface javax.xml.soap.Text at java.lang.Class.asSubclass(Class.java:3404) at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:887) at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1004) at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402) ... 9 more Container killed by the ApplicationMaster. 17/03/15 02:55:18 INFO mapreduce.Job: Task Id : attempt_1489512856623_0004_m_000000_0, Status : FAILED Error: java.io.IOException: Initialization of all the collectors failed. 
Error in last collector was :interface javax.xml.soap.Text at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414) at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.ClassCastException: interface javax.xml.soap.Text at java.lang.Class.asSubclass(Class.java:3404) at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:887) at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1004) at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402) ... 9 more 17/03/15 02:55:19 INFO mapreduce.Job: map 0% reduce 0% 17/03/15 02:55:31 INFO mapreduce.Job: Task Id : attempt_1489512856623_0004_m_000000_1, Status : FAILED Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :interface javax.xml.soap.Text at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414) at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Hadoop 2.2.0 setup: namenode initialization error
Initializing the HDFS namenode fails, please help!!! FATAL namenode.NameNode: Exception in namenode join java.lang.ClassCastException: com.sun.org.apache.xerces.internal.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2111) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2001) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1918) at org.apache.hadoop.conf.Configuration.get(Configuration.java:721) at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:740) at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:965) at org.apache.hadoop.security.Groups.<init>(Groups.java:62) at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214) at org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275) at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:269) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 15/04/13 04:15:01 INFO util.ExitUtil: Exiting with status 1 15/04/13 04:15:02 INFO namenode.NameNode: SHUTDOWN_MSG:
Hadoop 3.0: running yarn jar 3.0.0-alpha2.jar pi 10 100 fails
2017-05-17 19:07:12,789 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-brody/nm-local-dir/nmPrivate/container_1495017112106_0012_01_000001.tokens 2017-05-17 19:07:12,790 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user brody 2017-05-17 19:07:12,793 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-brody/nm-local-dir/nmPrivate/container_1495017112106_0012_01_000001.tokens to /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012/container_1495017112106_0012_01_000001.tokens 2017-05-17 19:07:12,794 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012 = file:/tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012 2017-05-17 19:07:12,843 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded. 2017-05-17 19:07:13,178 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1495017112106_0012_01_000001 transitioned from LOCALIZING to SCHEDULED 2017-05-17 19:07:13,178 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler: Starting container [container_1495017112106_0012_01_000001] 2017-05-17 19:07:13,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1495017112106_0012_01_000001 transitioned from SCHEDULED to RUNNING 2017-05-17 19:07:13,352 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1495017112106_0012_01_000001 2017-05-17 19:07:13,359 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012/container_1495017112106_0012_01_000001/default_container_executor.sh] 2017-05-17 19:07:13,686 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1495017112106_0012_01_000001 is : 1 2017-05-17 19:07:13,686 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1495017112106_0012_01_000001 and exit code: 1 ExitCodeException exitCode=1: 2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_1495017112106_0012_02_000001 from application application_1495017112106_0012 2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1495017112106_0012_02_000001 2017-05-17 19:07:15,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1495017112106_0012 2017-05-17 19:07:16,405 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1495017112106_0012_02_000001] 2017-05-17 19:07:16,408 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application 
application_1495017112106_0012 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP 2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1495017112106_0012 2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1495017112106_0012 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED 2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1495017112106_0012, with delay of 10800 seconds 2017-05-17 19:07:16,409 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-brody/nm-local-dir/usercache/brody/appcache/application_1495017112106_0012
Version conflict between Hadoop 2.4.0 and HBase-0.96.2-hadoop2
My Hadoop environment is 2.4.0 and HBase is HBase-0.96.2-hadoop2. Today I wrote a program against the HBase API, and running it throws the error below: 2014-09-01 18:16:00,247 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2014-09-01 18:16:00,283 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries. at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278) at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300) at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293) at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76) at org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1514) at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:113) at org.apache.hadoop.hbase.zookeeper.ZKConfig.getZKQuorumServersString(ZKConfig.java:265) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:134) at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1710) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:806) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247) at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:183) at cn.haha.HBase.HBaseApp1.main(HBaseApp1.java:26) 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=Admin-PC 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.7.0_65 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Oracle Corporation 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=C:\workDir\jdk7u65\jre 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client 
environment:java.class.path=E:\workDir\workspace_eclipse\HBase-0.96\bin;E:\workDir\workspace_eclipse\HBase-0.96\lib\activation-1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\aopalliance-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\asm-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\avro-1.7.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-1.7.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-beanutils-core-1.8.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-cli-1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-codec-1.7.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-collections-3.2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-compress-1.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-configuration-1.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-daemon-1.0.13.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-digester-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-el-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-httpclient-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-io-2.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-lang-2.6.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-logging-1.1.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-math-2.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\commons-net-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\findbugs-annotations-1.3.9-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\gmbal-api-only-3.0.0-b023.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-framework-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-server-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-http-servlet-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\grizzly-rcm-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guava-12.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\guice-servlet-3.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-annotations-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-auth-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-hdfs-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-app-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-core-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-jobclient-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-mapreduce-client-shuffle-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-api-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-client-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-common-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hadoop-yarn-server-nodemanager-2.2.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hamcrest-core-1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-client-0.96.2-hadoop2.jar
;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-common-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-examples-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-hadoop2-compat-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-it-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-prefix-tree-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-protocol-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2-tests.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-server-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-shell-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-testing-util-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\hbase-thrift-0.96.2-hadoop2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\htrace-core-2.04.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpclient-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\httpcore-4.1.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-core-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-jaxrs-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-mapper-asl-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jackson-xc-1.8.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jamon-runtime-2.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-compiler-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jasper-runtime-5.5.23.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.inject-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\javax.servlet-api-3.0.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-api-2.2.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jaxb-impl-2.2.3-1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-client-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-core-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-guice-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-json-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-server-1.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-core-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jersey-test-framework-grizzly2-1.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jets3t-0.6.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jettison-1.3.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-sslengine-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jetty-util-6.1.26.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jruby-complete-1.6.8.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsch-0.1.42.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsp-api-2.1-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\jsr305-1.3.9.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\junit-4.11.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\libthrift-0.9.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\log4j-1.2.17.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\management-api-3.
0.0-b012.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\metrics-core-2.1.2.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\netty-3.6.6.Final.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\paranamer-2.3.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\protobuf-java-2.5.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\servlet-api-2.5-6.1.14.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-api-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\slf4j-log4j12-1.6.4.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\snappy-java-1.0.4.1.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xmlenc-0.52.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\xz-1.0.jar;E:\workDir\workspace_eclipse\HBase-0.96\lib\zookeeper-3.4.5.jar 2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.library.path=C:\workDir\jdk7u65\bin;C:\windows\Sun\Java\bin;C:\windows\system32;C:\windows;C:/workDir/jdk7u65/bin/../jre/bin/client;C:/workDir/jdk7u65/bin/../jre/bin;C:/workDir/jdk7u65/bin/../jre/lib/i386;C:\Program Files (x86)\Common Files\NetSarang;C:\workDir\jdk7u65\bin;E:\workDir\apache-tomcat-7.0.55;E:\workDir\apache-tomcat-7.0.55;%CATALINA_HOME%\common\lib\common\lib\bin;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\Lenovo\Fingerprint Manager Pro\;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x64;C:\Program Files (x86)\IDM Computer Solutions\UltraEdit\;E:\workDir\eclipse-indigo-3.7.2;;. 
2014-09-01 18:16:00,297 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\ 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA> 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows 7 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=x86 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=6.1 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Users\Administrator 2014-09-01 18:16:00,298 INFO [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=E:\workDir\workspace_eclipse\HBase-0.96 2014-09-01 18:16:00,299 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,326 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,328 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,329 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,335 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001c, negotiated timeout = 40000 2014-09-01 18:16:00,472 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:00,473 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. 
Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:00,474 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:00,478 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001d, negotiated timeout = 40000 2014-09-01 18:16:00,499 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available 2014-09-01 18:16:00,817 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001d closed 2014-09-01 18:16:00,817 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down 2014-09-01 18:16:01,288 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xde1f90, quorum=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181, baseZNode=/hbase 2014-09-01 18:16:01,290 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xde1f90 connecting to ZooKeeper ensemble=hadoop2.slave01:2181,hadoop2.master:2181,hadoop2.slave02:2181 2014-09-01 18:16:01,290 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server hadoop2.slave01/192.168.100.51:2181. Will not attempt to authenticate using SASL (unknown error) 2014-09-01 18:16:01,291 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to hadoop2.slave01/192.168.100.51:2181, initiating session 2014-09-01 18:16:01,294 INFO [main-SendThread(hadoop2.slave01:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server hadoop2.slave01/192.168.100.51:2181, sessionid = 0x1482f4c45e2001e, negotiated timeout = 40000 2014-09-01 18:16:01,304 INFO [main] zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1482f4c45e2001e closed 2014-09-01 18:16:01,304 INFO [main-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
Maven 3.3.9 build of Hadoop 2.6.5 fails, please help
[INFO] Building Apache Hadoop Main 2.6.5 [INFO] ------------------------------------------------------------------------ Downloading: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Could not validate integrity of download from http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml: Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml Downloaded: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml (99 KB at 11.8 KB/sec) [WARNING] The metadata /root/.m2/repository/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata-ibiblio.org.xml is invalid: end tag name </body> must match start tag name <hr> from line 888 (position: START_TAG seen ... 08-Nov-2014 19:04 207\r\n</pre><hr></body>... @888:18) [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main ................................. FAILURE [ 8.416 s] [INFO] Apache Hadoop Build Tools .......................... SKIPPED [INFO] Apache Hadoop Project POM .......................... SKIPPED [INFO] Apache Hadoop Annotations .......................... SKIPPED [INFO] Apache Hadoop Assemblies ........................... SKIPPED [INFO] Apache Hadoop Project Dist POM ..................... SKIPPED [INFO] Apache Hadoop Maven Plugins ........................ SKIPPED [INFO] Apache Hadoop MiniKDC .............................. SKIPPED [INFO] Apache Hadoop Auth ................................. SKIPPED [INFO] Apache Hadoop Auth Examples ........................ SKIPPED [INFO] Apache Hadoop Common ............................... SKIPPED [INFO] Apache Hadoop NFS .................................. SKIPPED [INFO] Apache Hadoop KMS .................................. SKIPPED [INFO] Apache Hadoop Common Project ....................... SKIPPED [INFO] Apache Hadoop HDFS ................................. SKIPPED [INFO] Apache Hadoop HttpFS ............................... SKIPPED [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED [INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED [INFO] Apache Hadoop HDFS Project ......................... SKIPPED [INFO] hadoop-yarn ........................................ SKIPPED [INFO] hadoop-yarn-api .................................... SKIPPED [INFO] hadoop-yarn-common ................................. SKIPPED [INFO] hadoop-yarn-server ................................. SKIPPED [INFO] hadoop-yarn-server-common .......................... SKIPPED [INFO] hadoop-yarn-server-nodemanager ..................... SKIPPED [INFO] hadoop-yarn-server-web-proxy ....................... SKIPPED [INFO] hadoop-yarn-server-applicationhistoryservice ....... SKIPPED [INFO] hadoop-yarn-server-resourcemanager ................. SKIPPED [INFO] hadoop-yarn-server-tests ........................... SKIPPED [INFO] hadoop-yarn-client ................................. 
SKIPPED [INFO] hadoop-yarn-applications ........................... SKIPPED [INFO] hadoop-yarn-applications-distributedshell .......... SKIPPED [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SKIPPED [INFO] hadoop-yarn-site ................................... SKIPPED [INFO] hadoop-yarn-registry ............................... SKIPPED [INFO] hadoop-yarn-project ................................ SKIPPED [INFO] hadoop-mapreduce-client ............................ SKIPPED [INFO] hadoop-mapreduce-client-core ....................... SKIPPED [INFO] hadoop-mapreduce-client-common ..................... SKIPPED [INFO] hadoop-mapreduce-client-shuffle .................... SKIPPED [INFO] hadoop-mapreduce-client-app ........................ SKIPPED [INFO] hadoop-mapreduce-client-hs ......................... SKIPPED [INFO] hadoop-mapreduce-client-jobclient .................. SKIPPED [INFO] hadoop-mapreduce-client-hs-plugins ................. SKIPPED [INFO] Apache Hadoop MapReduce Examples ................... SKIPPED [INFO] hadoop-mapreduce ................................... SKIPPED [INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED [INFO] Apache Hadoop Distributed Copy ..................... SKIPPED [INFO] Apache Hadoop Archives ............................. SKIPPED [INFO] Apache Hadoop Rumen ................................ SKIPPED [INFO] Apache Hadoop Gridmix .............................. SKIPPED [INFO] Apache Hadoop Data Join ............................ SKIPPED [INFO] Apache Hadoop Ant Tasks ............................ SKIPPED [INFO] Apache Hadoop Extras ............................... SKIPPED [INFO] Apache Hadoop Pipes ................................ SKIPPED [INFO] Apache Hadoop OpenStack support .................... SKIPPED [INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED [INFO] Apache Hadoop Client ............................... SKIPPED [INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED [INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED [INFO] Apache Hadoop Tools Dist ........................... SKIPPED [INFO] Apache Hadoop Tools ................................ SKIPPED [INFO] Apache Hadoop Distribution ......................... SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 06:03 min [INFO] Finished at: 2018-06-23T11:25:17+08:00 [INFO] Final Memory: 27M/69M [INFO] ------------------------------------------------------------------------ [ERROR] Error resolving version for plugin 'org.apache.maven.plugins:maven-javadoc-plugin' from the repositories [local (/root/.m2/repository), ibiblio.org (http://mirrors.ibiblio.org/pub/mirrors/maven2)]: Plugin not found in any plugin repository -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginVersionResolutionException You have new mail in /var/spool/mail/root
Hadoop 2.6: namenode creation fails
(Everything before this point looked normal.) 2016-03-23 08:30:10,036 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504) 2016-03-23 08:30:10,040 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 2016-03-23 08:30:10,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 2016-03-23 08:30:10,141 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2016-03-23 08:30:10,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504) 2016-03-23 08:30:10,142 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2016-03-23 08:30:10,144 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1 ************************************************************/
Help! Hadoop 2.2.0 cluster: namenode throws a NullPointerException after starting HDFS
The log is as follows: 2015-02-07 01:01:46,610 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring NN shutdown. Shutting down immediately. java.lang.NullPointerException at org.apache.hadoop.hdfs.DFSUtil.substituteForWildcardAddress(DFSUtil.java:942) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.getHttpAddress(StandbyCheckpointer.java:108) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.setNameNodeAddresses(StandbyCheckpointer.java:90) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.<init>(StandbyCheckpointer.java:76) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startStandbyServices(FSNamesystem.java:994) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startStandbyServices(NameNode.java:1456) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.enterState(StandbyState.java:58) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:686) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 2015-02-07 01:01:46,614 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2015-02-07 01:01:46,620 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: I just don't understand why it keeps throwing a NullPointerException, and it never happens under remote debugging. I'm completely at a loss.
Hive 0.9.0 + HBase 0.96.2 + Hadoop 2.2.0 integration: running an HQL query fails as follows
hive> select * from hbasehive_table; OK Exception in thread "main" java.lang.InstantiationError: org.apache.hadoop.mapreduce.JobContext at org.apache.hadoop.hive.shims.Hadoop20SShims.newJobContext(Hadoop20SShims.java:58) at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:473) at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:281) at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:320) at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:154) at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1377) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Hadoop 2.2.0 cluster with ResourceManager HA: the NodeManager cannot communicate with the ResourceManager
yarn-site.xml: <?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>yarn.resourcemanager.ha.enabled</name> <value>true</value> </property> <property> <name>yarn.resourcemanager.cluster-id</name> <value>yrc</value> </property> <property> <name>yarn.resourcemanager.ha.rm-ids</name> <value>rm1,rm2</value> </property> <property> <name>yarn.resourcemanager.hostname.rm1</name> <value>11.24.88.242</value> </property> <property> <name>yarn.resourcemanager.hostname.rm2</name> <value>11.24.88.244</value> </property> <property> <name>yarn.resourcemanager.zk-address</name> <value>11.20.26.6:2181,11.20.26.2:2181,11.20.26.3:2181</value> </property> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> RM HA is configured in yarn-site.xml, but it keeps failing: the NodeManager keeps trying 0.0.0.0:8031 and never reaches the ResourceManagers: 2019-08-13 13:33:26,799 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From hadoop7/11.20.200.197 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:181) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:199) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:339) at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:386) Caused by: java.net.ConnectException: Call From hadoop7/11.20.200.197 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) at org.apache.hadoop.ipc.Client.call(Client.java:1351) at org.apache.hadoop.ipc.Client.call(Client.java:1300) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy23.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at $Proxy24.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:238) at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:175) ... 6 more Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642) at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399) at org.apache.hadoop.ipc.Client.call(Client.java:1318)
hadoop2.5.0-cdh5.3.1: how do I choose a compatible Spark?
I installed hadoop2.5.0-cdh5.3.1 together with spark-1.2.1-bin-hadoop2.4.tgz and ran into many problems. Is this a version incompatibility? Please help! Also, how do I compile the jar? I can't find a build or sbt path.
Is Hadoop 3.x widely used in production yet? Which companies have started using it?
For example, the currently available stable official releases, hadoop 3.0.3+ and hadoop-3.1.1+: which companies are already running these in production?
Hadoop 3.1.0 distributed environment setup
Environment: VMware, CentOS 6.5, JDK 1.8 (Oracle), Hadoop 3.1.0. Running start-dfs.sh on the master node only starts the namenode and datanode on master plus the secondarynamenode on slave1. The datanode process on every other worker node never starts; each one has to be started manually with hdfs --daemon start datanode. The namenode and datanode logs on master show nothing abnormal. [root@master hadoop-3.1.0]# start-dfs.sh Starting namenodes on [master] Starting datanodes Starting secondary namenodes [node1] 2019-07-14 11:04:46,521 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable [root@master hadoop-3.1.0]# jps 29025 NameNode 29147 DataNode 29420 Jps Question: why can't start-dfs.sh start the datanodes on the other slave nodes, when it can start the secondarynamenode on a slave? Please take a look. (screenshot: https://img-ask.csdn.net/upload/201907/14/1563100513_35604.png)
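One thing worth checking for this exact symptom: start-dfs.sh decides which hosts get a datanode from the workers file (renamed from slaves in Hadoop 3.x), while the secondarynamenode host comes separately from dfs.namenode.secondary.http-address, which would explain why one starts and the other doesn't. A sketch, with the worker hostnames assumed from the post:

# $HADOOP_HOME/etc/hadoop/workers  -- one worker hostname per line
master
slave1
slave2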
Upgrading Hadoop 2.6.0 to 2.6.3
I followed the procedure from the official site: 1. downloaded Hadoop 2.6.3; 2. ran rollingUpgrade prepare; 3. stopped the standby namenode and ran rollingUpgrade started in the 2.6.3 environment; 4. failed over the active and standby namenodes and repeated step 3 on the new standby; 5. upgraded the datanodes; 6. ran the finalize step. After step 3 a lot of INFO-level messages are printed; can those be ignored? Also, a namenode started this way has no pid file (there is no namenode entry under hadoop/pids); is that a problem? And after the datanodes were upgraded, the finalize step reported that no rolling upgrade is in progress. Has anyone done this upgrade? Where did my procedure go wrong?
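For reference, the documented HDFS rolling-upgrade flow corresponds to roughly this command sequence; a sketch of the standard steps, not a diagnosis of the pid/finalize problem above:

hdfs dfsadmin -rollingUpgrade prepare    # create a rollback fsimage
hdfs dfsadmin -rollingUpgrade query      # repeat until it reports the rollback image is ready
# upgrade and restart each namenode in turn with the startup option: -rollingUpgrade started
# upgrade and restart the datanodes, then:
hdfs dfsadmin -rollingUpgrade finalize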
Hadoop 2.5.2: cannot run wordcount or -put
Hadoop 2.5.2, one master and two slaves named slave1 and slave2. After startup the master shows: 30784 NameNode 31394 Jps 30972 SecondaryNameNode 31132 ResourceManager and slave1 and slave2 both show: 8064 Jps 7943 NodeManager 7834 DataNode Nothing looks abnormal, but when I run hadoop fs -put README.txt /input on the master it hangs for a long time and finally fails with: 17/03/09 19:59:11 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 17/03/09 19:59:11 INFO hdfs.DFSClient: Abandoning BP-247473795-10.202.15.17-1489054138763:blk_1073741827_1003 17/03/09 19:59:11 INFO hdfs.DFSClient: Excluding datanode 10.202.15.175:50010 17/03/09 20:01:18 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 17/03/09 20:01:18 INFO hdfs.DFSClient: Abandoning BP-247473795-10.202.15.17-1489054138763:blk_1073741828_1004 17/03/09 20:01:18 INFO hdfs.DFSClient: Excluding datanode 10.202.15.174:50010 17/03/09 20:01:18 WARN hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2791) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:606) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:455) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy9.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy9.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1449) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1270) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) put: File /input/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. Firewalls are disabled on every machine, I have deleted the directories behind hadoop.tmp.dir, dfs.name.dir and dfs.data.dir several times, and I have re-run hadoop namenode -format repeatedly, yet the problem persists. But if I run hadoop fs -put README.txt /input on a slave instead, there is no error: the file copies over and all three machines end up with it. Please help; this has been bothering me for days.
Hadoop 2.6 setup: error when formatting the namenode
log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /var/log/hadoop/hadoop/hdfs-audit.log (No such file or directory) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:221) at java.io.FileOutputStream.<init>(FileOutputStream.java:142) at org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) 15/10/12 16:15:27 WARN namenode.NameNode: Encountered exception during format: java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567) at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395) 15/10/12 16:15:27 FATAL namenode.NameNode: Exception in namenode join java.io.IOException: Cannot remove current directory: /hadoop/hdfs/namenode/current at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546) at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567) at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:870) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1281) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1395) 15/10/12 16:15:27 INFO util.ExitUtil: Exiting with status 1 15/10/12 16:15:27 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************
Is the MapReduce in Hadoop 1.0.2 version 1 or version 2?
I've gotten mixed up about the various Hadoop versions. I have an experiment to run soon and I'm not sure whether the unit of resource allocation in Hadoop 1.0.2 is the container or the slot.