Hadoop setup error, I really can't find a fix

I've been searching for a long time and still can't figure out how to solve this. Please help, thanks!

STARTUP_MSG: java = 1.6.0_45
************************************************************/
15/11/28 08:52:23 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/11/28 08:52:23 INFO namenode.NameNode: createNameNode [-format]
15/11/28 08:52:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-0f5cc549-bfbb-45b2-a735-c3500a2e83cf
15/11/28 08:52:25 INFO namenode.FSNamesystem: fsLock is fair:true
15/11/28 08:52:25 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/11/28 08:52:25 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/11/28 08:52:25 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/11/28 08:52:25 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Nov 28 08:52:25
15/11/28 08:52:25 INFO util.GSet: Computing capacity for map BlocksMap
15/11/28 08:52:25 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:25 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/11/28 08:52:25 INFO util.GSet: capacity = 2^22 = 4194304 entries
15/11/28 08:52:25 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/11/28 08:52:25 INFO blockmanagement.BlockManager: defaultReplication = 1
15/11/28 08:52:25 INFO blockmanagement.BlockManager: maxReplication = 512
15/11/28 08:52:25 INFO blockmanagement.BlockManager: minReplication = 1
15/11/28 08:52:25 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/11/28 08:52:25 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/11/28 08:52:25 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/11/28 08:52:25 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/11/28 08:52:25 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/11/28 08:52:25 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
15/11/28 08:52:25 INFO namenode.FSNamesystem: supergroup = supergroup
15/11/28 08:52:25 INFO namenode.FSNamesystem: isPermissionEnabled = false
15/11/28 08:52:25 INFO namenode.FSNamesystem: HA Enabled: false
15/11/28 08:52:25 INFO namenode.FSNamesystem: Append Enabled: true
15/11/28 08:52:26 INFO util.GSet: Computing capacity for map INodeMap
15/11/28 08:52:26 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:26 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/11/28 08:52:26 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/11/28 08:52:26 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/11/28 08:52:26 INFO util.GSet: Computing capacity for map cachedBlocks
15/11/28 08:52:26 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:26 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/11/28 08:52:26 INFO util.GSet: capacity = 2^19 = 524288 entries
15/11/28 08:52:26 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/11/28 08:52:26 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/11/28 08:52:26 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/11/28 08:52:26 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/11/28 08:52:26 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/11/28 08:52:26 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/11/28 08:52:26 INFO util.GSet: VM type = 32-bit
15/11/28 08:52:26 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/11/28 08:52:26 INFO util.GSet: capacity = 2^16 = 65536 entries
15/11/28 08:52:26 INFO namenode.NNConf: ACLs enabled? false
15/11/28 08:52:26 INFO namenode.NNConf: XAttrs enabled? true
15/11/28 08:52:26 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/11/28 08:52:26 FATAL namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.&lt;init&gt;(File.java:368)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:327)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:261)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
15/11/28 08:52:26 INFO util.ExitUtil: Exiting with status 1
15/11/28 08:52:26 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.131.2
************************************************************/
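A likely culprit (only a guess, since the config files are not shown here) is how dfs.namenode.name.dir or hadoop.tmp.dir is written: a value such as file://home/hadoop/dfs/name makes Java parse "home" as the URI's authority, and java.io.File(URI), which NNStorage.getStorageDirectory calls in the stack trace above, rejects exactly that with "URI has an authority component". file:///home/hadoop/dfs/name (three slashes) or a plain absolute path is accepted. The snippet below is a minimal sketch of that behaviour; the path /home/hadoop/dfs/name is a made-up placeholder, not taken from the actual configuration.

```
import java.io.File;
import java.net.URI;

public class UriAuthorityDemo {
    public static void main(String[] args) throws Exception {
        // Two slashes: "home" becomes the URI authority, so File(URI) refuses it.
        URI twoSlashes = new URI("file://home/hadoop/dfs/name");
        System.out.println("authority = " + twoSlashes.getAuthority()); // prints: home
        try {
            new File(twoSlashes);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: URI has an authority component
        }

        // Three slashes: empty (null) authority, which File(URI) accepts.
        URI threeSlashes = new URI("file:///home/hadoop/dfs/name");
        System.out.println(new File(threeSlashes)); // prints: /home/hadoop/dfs/name
    }
}
```

If the configured values already use three slashes or bare paths, it would help to paste the dfs.namenode.name.dir and hadoop.tmp.dir entries from hdfs-site.xml and core-site.xml into the question.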

Other related questions
Hadoop environment setup: error "No such file or directory"?
When running ./hadoop namenode -format, initialization keeps failing. The file definitely exists, but the error keeps saying it cannot be found. I have confirmed everything is 64-bit. How can I fix this? ![screenshot](https://img-ask.csdn.net/upload/201811/26/1543230388_813472.png)
Error compiling the hadoop eclipse plugin on Windows 10
win10下编译hadoop eclipse plugin报错,请求各位大佬帮忙看一下 ``` D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin>ant jar -Dversion=2.8.3 -Declipse.home=C:\Users\Daybr\eclipse\java-neon\eclipse -Dhadoop.home=D:\hadoop-2.8.3\hadoop-2.8.3 Buildfile: D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\build.xml check-contrib: init: [echo] contrib: eclipse-plugin init-contrib: ivy-probe-antlib: ivy-init-antlib: ivy-init: [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:configure] :: loading settings :: file = D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\ivy\ivysettings.xml ivy-resolve-common: ivy-retrieve-common: [ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead [ivy:cachepath] :: loading settings :: file = D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\ivy\ivysettings.xml compile: [echo] contrib: eclipse-plugin [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\build.xml:76: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds [javac] Compiling 45 source files to D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\build\contrib\eclipse-plugin\classes [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\Activator.java:22: 错误: 程序包org.eclipse.ui.plugin不存在 [javac] import org.eclipse.ui.plugin.AbstractUIPlugin; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\Activator.java:28: 错误: 找不到符号 [javac] public class Activator extends AbstractUIPlugin { [javac] ^ [javac] 符号: 类 AbstractUIPlugin [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ErrorMessageDialog.java:22: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Display; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:21: 错误: 程序包org.eclipse.debug.ui不存在 [javac] import org.eclipse.debug.ui.IDebugUIConstants; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:22: 错误: 程序包org.eclipse.jdt.ui不存在 [javac] import org.eclipse.jdt.ui.JavaUI; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:23: 错误: 程序包org.eclipse.ui不存在 [javac] import org.eclipse.ui.IFolderLayout; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:24: 错误: 程序包org.eclipse.ui不存在 [javac] import org.eclipse.ui.IPageLayout; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:25: 错误: 程序包org.eclipse.ui不存在 [javac] import 
org.eclipse.ui.IPerspectiveFactory; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:26: 错误: 程序包org.eclipse.ui.console不存在 [javac] import org.eclipse.ui.console.IConsoleConstants; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:34: 错误: 找不到符号 [javac] public class HadoopPerspectiveFactory implements IPerspectiveFactory { [javac] ^ [javac] 符号: 类 IPerspectiveFactory [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\HadoopPerspectiveFactory.java:36: 错误: 找不到符号 [javac] public void createInitialLayout(IPageLayout layout) { [javac] ^ [javac] 符号: 类 IPageLayout [javac] 位置: 类 HadoopPerspectiveFactory [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:25: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.FileLocator; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:26: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.Path; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:28: 错误: 程序包org.eclipse.swt.graphics不存在 [javac] import org.eclipse.swt.graphics.Image; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:29: 错误: 程序包org.eclipse.ui不存在 [javac] import org.eclipse.ui.ISharedImages; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:30: 错误: 程序包org.eclipse.ui不存在 [javac] import org.eclipse.ui.PlatformUI; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:31: 错误: 程序包org.eclipse.ui.plugin不存在 [javac] import org.eclipse.ui.plugin.AbstractUIPlugin; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:46: 错误: 找不到符号 [javac] private ISharedImages sharedImages = [javac] ^ [javac] 符号: 类 ISharedImages [javac] 位置: 类 ImageLibrary [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:70: 错误: 找不到符号 [javac] public static Image getImage(String name) { [javac] ^ [javac] 符号: 类 Image [javac] 位置: 类 ImageLibrary [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:95: 错误: 找不到符号 [javac] private Map<String, Image> imageMap = new HashMap<String, Image>(); [javac] ^ [javac] 符号: 类 Image [javac] 位置: 类 ImageLibrary [javac] 
D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\ImageLibrary.java:154: 错误: 找不到符号 [javac] private Image getImageByName(String name) { [javac] ^ [javac] 符号: 类 Image [javac] 位置: 类 ImageLibrary [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:29: 错误: 程序包org.eclipse.core.resources不存在 [javac] import org.eclipse.core.resources.IProject; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:30: 错误: 程序包org.eclipse.core.resources不存在 [javac] import org.eclipse.core.resources.IProjectNature; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:31: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.CoreException; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:32: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.NullProgressMonitor; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:33: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.Path; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:34: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.QualifiedName; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:35: 错误: 程序包org.eclipse.jdt.core不存在 [javac] import org.eclipse.jdt.core.IClasspathEntry; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:36: 错误: 程序包org.eclipse.jdt.core不存在 [javac] import org.eclipse.jdt.core.IJavaProject; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:37: 错误: 程序包org.eclipse.jdt.core不存在 [javac] import org.eclipse.jdt.core.JavaCore; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:44: 错误: 找不到符号 [javac] public class MapReduceNature implements IProjectNature { [javac] ^ [javac] 符号: 类 IProjectNature [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:48: 错误: 找不到符号 [javac] private IProject project; [javac] ^ [javac] 符号: 类 IProject [javac] 位置: 类 MapReduceNature [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:56: 错误: 找不到符号 [javac] public void configure() throws CoreException { [javac] ^ [javac] 符号: 类 
CoreException [javac] 位置: 类 MapReduceNature [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:130: 错误: 找不到符号 [javac] public void deconfigure() throws CoreException { [javac] ^ [javac] 符号: 类 CoreException [javac] 位置: 类 MapReduceNature [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:137: 错误: 找不到符号 [javac] public IProject getProject() { [javac] ^ [javac] 符号: 类 IProject [javac] 位置: 类 MapReduceNature [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\MapReduceNature.java:145: 错误: 找不到符号 [javac] public void setProject(IProject project) { [javac] ^ [javac] 符号: 类 IProject [javac] 位置: 类 MapReduceNature [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:21: 错误: 程序包org.eclipse.core.resources不存在 [javac] import org.eclipse.core.resources.IFile; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:22: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.CoreException; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:23: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.IProgressMonitor; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:24: 错误: 程序包org.eclipse.jdt.core不存在 [javac] import org.eclipse.jdt.core.IJavaElement; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:25: 错误: 程序包org.eclipse.jdt.internal.ui.wizards不存在 [javac] import org.eclipse.jdt.internal.ui.wizards.NewElementWizard; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:28: 错误: 程序包org.eclipse.ui不存在 [javac] import org.eclipse.ui.INewWizard; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:29: 错误: 程序包org.eclipse.ui不存在 [javac] import org.eclipse.ui.IWorkbench; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:36: 错误: 找不到符号 [javac] public class NewDriverWizard extends NewElementWizard implements INewWizard, [javac] ^ [javac] 符号: 类 NewElementWizard [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:36: 错误: 找不到符号 [javac] public class NewDriverWizard extends NewElementWizard implements INewWizard, [javac] ^ [javac] 符号: 类 INewWizard [javac] 
D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:23: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.CoreException; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:24: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.FileLocator; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:25: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.IProgressMonitor; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:26: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.IStatus; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:27: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.Path; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:28: 错误: 程序包org.eclipse.jdt.core不存在 [javac] import org.eclipse.jdt.core.IType; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:29: 错误: 程序包org.eclipse.jdt.core不存在 [javac] import org.eclipse.jdt.core.JavaModelException; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:30: 错误: 程序包org.eclipse.jdt.core.search不存在 [javac] import org.eclipse.jdt.core.search.SearchEngine; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:31: 错误: 程序包org.eclipse.jdt.ui不存在 [javac] import org.eclipse.jdt.ui.IJavaElementSearchConstants; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:32: 错误: 程序包org.eclipse.jdt.ui不存在 [javac] import org.eclipse.jdt.ui.JavaUI; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:33: 错误: 程序包org.eclipse.jdt.ui.wizards不存在 [javac] import org.eclipse.jdt.ui.wizards.NewTypeWizardPage; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:38: 错误: 程序包org.eclipse.swt不存在 [javac] import org.eclipse.swt.SWT; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:39: 错误: 程序包org.eclipse.swt.layout不存在 [javac] import org.eclipse.swt.layout.GridData; [javac] ^ [javac] 
D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:40: 错误: 程序包org.eclipse.swt.layout不存在 [javac] import org.eclipse.swt.layout.GridLayout; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:41: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Button; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:42: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Composite; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:43: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Event; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:44: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Label; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:45: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Listener; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:46: 错误: 程序包org.eclipse.swt.widgets不存在 [javac] import org.eclipse.swt.widgets.Text; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:47: 错误: 程序包org.eclipse.ui.dialogs不存在 [javac] import org.eclipse.ui.dialogs.SelectionDialog; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:54: 错误: 找不到符号 [javac] public class NewDriverWizardPage extends NewTypeWizardPage { [javac] ^ [javac] 符号: 类 NewTypeWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:43: 错误: 找不到符号 [javac] public void run(IProgressMonitor monitor) { [javac] ^ [javac] 符号: 类 IProgressMonitor [javac] 位置: 类 NewDriverWizard [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:60: 错误: 找不到符号 [javac] public void init(IWorkbench workbench, IStructuredSelection selection) { [javac] ^ [javac] 符号: 类 IWorkbench [javac] 位置: 类 NewDriverWizard [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:90: 错误: 找不到符号 [javac] protected void finishPage(IProgressMonitor monitor) [javac] ^ [javac] 符号: 类 IProgressMonitor [javac] 位置: 类 NewDriverWizard [javac] 
D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:91: 错误: 找不到符号 [javac] throws InterruptedException, CoreException { [javac] ^ [javac] 符号: 类 CoreException [javac] 位置: 类 NewDriverWizard [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizard.java:96: 错误: 找不到符号 [javac] public IJavaElement getCreatedElement() { [javac] ^ [javac] 符号: 类 IJavaElement [javac] 位置: 类 NewDriverWizard [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:55: 错误: 找不到符号 [javac] private Button isCreateMapMethod; [javac] ^ [javac] 符号: 类 Button [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:57: 错误: 找不到符号 [javac] private Text reducerText; [javac] ^ [javac] 符号: 类 Text [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:59: 错误: 找不到符号 [javac] private Text mapperText; [javac] ^ [javac] 符号: 类 Text [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:85: 错误: 找不到符号 [javac] public void createType(IProgressMonitor monitor) throws CoreException, [javac] ^ [javac] 符号: 类 IProgressMonitor [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:85: 错误: 找不到符号 [javac] public void createType(IProgressMonitor monitor) throws CoreException, [javac] ^ [javac] 符号: 类 CoreException [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:91: 错误: 找不到符号 [javac] protected void createTypeMembers(final IType newType, ImportsManager imports, [javac] ^ [javac] 符号: 类 IType [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:91: 错误: 找不到符号 [javac] protected void createTypeMembers(final IType newType, ImportsManager imports, [javac] ^ [javac] 符号: 类 ImportsManager [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:92: 错误: 找不到符号 [javac] final IProgressMonitor monitor) throws CoreException { [javac] ^ [javac] 符号: 类 IProgressMonitor [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:92: 错误: 找不到符号 [javac] final IProgressMonitor monitor) throws CoreException { [javac] ^ [javac] 符号: 类 CoreException [javac] 位置: 类 NewDriverWizardPage [javac] 
D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:145: 错误: 找不到符号 [javac] public void createControl(Composite parent) { [javac] ^ [javac] 符号: 类 Composite [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:199: 错误: 找不到符号 [javac] private void createMapperControls(Composite composite) { [javac] ^ [javac] 符号: 类 Composite [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:204: 错误: 找不到符号 [javac] private void createReducerControls(Composite composite) { [javac] ^ [javac] 符号: 类 Composite [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:209: 错误: 找不到符号 [javac] private Text createBrowseClassControl(final Composite composite, [javac] ^ [javac] 符号: 类 Composite [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewDriverWizardPage.java:209: 错误: 找不到符号 [javac] private Text createBrowseClassControl(final Composite composite, [javac] ^ [javac] 符号: 类 Text [javac] 位置: 类 NewDriverWizardPage [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:29: 错误: 程序包org.eclipse.core.resources不存在 [javac] import org.eclipse.core.resources.IProject; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:30: 错误: 程序包org.eclipse.core.resources不存在 [javac] import org.eclipse.core.resources.IProjectDescription; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:31: 错误: 程序包org.eclipse.core.resources不存在 [javac] import org.eclipse.core.resources.ResourcesPlugin; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:32: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.CoreException; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:33: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.IConfigurationElement; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:34: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.IExecutableExtension; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:36: 错误: 
程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.NullProgressMonitor; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:37: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.Path; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:38: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.QualifiedName; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:39: 错误: 程序包org.eclipse.core.runtime不存在 [javac] import org.eclipse.core.runtime.SubProgressMonitor; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:40: 错误: 程序包org.eclipse.jdt.ui.wizards不存在 [javac] import org.eclipse.jdt.ui.wizards.NewJavaProjectWizardPage; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:49: 错误: 程序包org.eclipse.swt不存在 [javac] import org.eclipse.swt.SWT; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:50: 错误: 程序包org.eclipse.swt.events不存在 [javac] import org.eclipse.swt.events.SelectionEvent; [javac] ^ [javac] D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\NewMapReduceProjectWizard.java:51: 错误: 程序包org.eclipse.swt.events不存在 [javac] import org.eclipse.swt.events.SelectionListener; [javac] ^ [javac] 注: D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\dfs\DFSFolder.java使用或覆盖了已过时的 API。 [javac] 注: 有关详细信息, 请使用 -Xlint:deprecation 重新编译。 [javac] 注: D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\src\java\org\apache\hadoop\eclipse\actions\DFSActionImpl.java使用了未经检查或不安全的操作。 [javac] 注: 有关详细信息, 请使用 -Xlint:unchecked 重新编译。 [javac] 注: 某些消息已经过简化; 请使用 -Xdiags:verbose 重新编译以获得完整输出 [javac] 100 个错误 BUILD FAILED D:\hadoop2x-eclipse-plugin-master-master\hadoop2x-eclipse-plugin-master-master\src\contrib\eclipse-plugin\build.xml:76: Compile failed; see the compiler error output for details. Total time: 4 seconds ```
Error mounting hadoop hdfs
When mounting HDFS on Linux, fuse-dfs reports "fuse-dfs didn't recognize /hadoop/hdfs,-2", and opening /hadoop/hdfs afterwards gives an input/output error on the directory.
Newbie asking for help with Hadoop cluster setup: ZKFC error
After running hdfs zkfc -formatZK:

WARNING: Before proceeding, ensure that all HDFS services and failover controllers are stopped!
===============================================
Proceed formatting /hadoop-ha/mycluster? (Y or N) 16/02/26 01:18:56 INFO ha.ActiveStandbyElector: Session connected.
y
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
16/02/26 01:19:12 INFO zookeeper.ClientCnxn: EventThread shut down
16/02/26 01:19:12 INFO zookeeper.ZooKeeper: Session: 0x153196bc9790000 closed

But ZKFC still did not come up, so I tried:

[root@h1 ~]# hadoop-daemon.sh start DFSZKFailoverController
.........
Error: Could not find or load main class DFSZKFailoverController

It still fails!
Problem setting up a Hadoop development environment on Windows
I recently wanted to learn hadoop and followed an online post to set up an environment, but the bin/hadoop namenode -format command keeps failing. ![screenshot](https://img-ask.csdn.net/upload/201503/17/1426599008_924486.jpg) hadoop version: 1.2.0. hadoop-env.sh contains export JAVA_HOME=/cygdrive/c/Java/jdk1.6.0_43. Could someone take a look at what the problem actually is?
Errors in the hadoop datanode log
2016-04-10 11:35:08,998 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService java.io.EOFException: End of File Exception between local host is: "master/10.13.6.186"; destination host is: "master":9000; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
Error running wordcount on hadoop
WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 17/05/03 02:35:31 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387) at org.apache.hadoop.ipc.Client.call(Client.java:1107) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) at com.sun.proxy.$Proxy1.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62) at com.sun.proxy.$Proxy1.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989) 17/05/03 02:35:31 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null 17/05/03 02:35:31 WARN hdfs.DFSClient: Could not get block locations. Source file "/zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar" - Aborting... 
17/05/03 02:35:31 INFO mapred.JobClient: Cleaning up the staging area hdfs://192.168.136.131:9000/zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001 17/05/03 02:35:31 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387) org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387) at org.apache.hadoop.ipc.Client.call(Client.java:1107) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) at com.sun.proxy.$Proxy1.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62) at com.sun.proxy.$Proxy1.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989) 17/05/03 02:35:31 ERROR hdfs.DFSClient: Failed to close file /zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
/zxc/hdfs/tmp/mapred/staging/root/.staging/job_201705030234_0001/job.jar could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736) at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387) at org.apache.hadoop.ipc.Client.call(Client.java:1107) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) at com.sun.proxy.$Proxy1.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62) at com.sun.proxy.$Proxy1.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989) 报错内容是没有节点,但是用jps查看都已经正常启动,而且防火墙也关闭了,节点间通信也都正常
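For what it's worth, "could only be replicated to 0 nodes, instead of 1" means the NameNode had no usable DataNode at the moment of the write, which is not the same thing as the DataNode process showing up in jps (the asker notes above that jps looks normal, the firewall is off and the nodes can reach each other). As a hedged diagnostic sketch, not a fix, the class below asks the NameNode how many DataNodes are actually registered and how much space each reports; it assumes the cluster's core-site.xml is on the classpath and uses DistributedFileSystem.getDataNodeStats(), which exists in both the 1.x and 2.x APIs.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class LiveDatanodeCheck {
    public static void main(String[] args) throws Exception {
        // Assumes core-site.xml (fs.default.name / fs.defaultFS) is on the classpath,
        // e.g. pointing at hdfs://192.168.136.131:9000 as in the question above.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (!(fs instanceof DistributedFileSystem)) {
            System.err.println("Default filesystem is not HDFS: " + fs.getUri());
            return;
        }
        DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
        System.out.println("DataNodes registered with the NameNode: " + nodes.length);
        for (DatanodeInfo node : nodes) {
            // A node with no remaining space still shows up under jps but cannot
            // accept replicas, which produces the same "0 nodes" error.
            System.out.println(node.getHostName() + "  remaining bytes = " + node.getRemaining());
        }
    }
}
```

If this prints 0, the DataNode never registered with the NameNode (a namespace/cluster ID mismatch after reformatting the NameNode is a common cause); if it prints nodes with little or no remaining space, the write fails for capacity reasons even though the daemons look healthy.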
Error formatting the namenode in hadoop
ubuntu 16.04, hadoop 2.6.0. Running hadoop namenode -format to initialize reports that namenode cannot be found, but my environment variable configuration looks fine. ![screenshot](https://img-ask.csdn.net/upload/201609/11/1473599781_110258.png) ![screenshot](https://img-ask.csdn.net/upload/201609/11/1473599831_458409.png) Any advice would be appreciated.
Running a jar with yarn on hadoop throws java.lang.ClassNotFoundException (the missing class is not the main class)
1. I wrote a data-analysis program and packaged it into a jar with IDEA; the dependency jars were all included. ![screenshot](https://img-ask.csdn.net/upload/201911/03/1572779664_439750.png) I have already set job.setJarByClass(CountDurationRunner.class);
2. Started the hadoop, zookeeper and hbase clusters.
3. Ran the jar on yarn: $ /opt/module/hadoop-2.7.2/bin/yarn jar ct_analysis.jar runner.CountDurationRunner
Error screenshot: ![screenshot](https://img-ask.csdn.net/upload/201911/03/1572779908_781957.png)
CountDurationRunner class code:
```
package runner;

import kv.key.ComDimension; // this is the first class that is not found
import kv.value.CountDurationValue;
import mapper.CountDurationMapper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import outputformat.MysqlOutputFormat;
import reducer.CountDurationReducer;

import java.io.IOException;

public class CountDurationRunner implements Tool {
    private Configuration conf = null;

    @Override
    public void setConf(Configuration conf) {
        this.conf = HBaseConfiguration.create(conf);
    }

    @Override
    public Configuration getConf() {
        return this.conf;
    }

    @Override
    public int run(String[] args) throws Exception {
        // get the conf
        Configuration conf = this.getConf();
        // instantiate the job
        Job job = Job.getInstance(conf);
        job.setJarByClass(CountDurationRunner.class);
        // wire up the Mapper / InputFormat
        initHbaseInputConfig(job);
        // wire up the Reducer / OutputFormat
        initHbaseOutputConfig(job);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    private void initHbaseOutputConfig(Job job) {
        Connection connection = null;
        Admin admin = null;
        String tableName = "ns_ct:calllog";
        try {
            connection = ConnectionFactory.createConnection(job.getConfiguration());
            admin = connection.getAdmin();
            if (!admin.tableExists(TableName.valueOf(tableName)))
                throw new RuntimeException("target table not found");
            Scan scan = new Scan();
            // initialize the Mapper
            TableMapReduceUtil.initTableMapperJob(
                    tableName,
                    scan,
                    CountDurationMapper.class,
                    ComDimension.class,
                    Text.class,
                    job,
                    true);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (admin != null) admin.close();
                if (connection != null) connection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void initHbaseInputConfig(Job job) {
        job.setReducerClass(CountDurationReducer.class);
        job.setOutputKeyClass(ComDimension.class);
        job.setOutputValueClass(CountDurationValue.class);
        job.setOutputFormatClass(MysqlOutputFormat.class);
    }

    public static void main(String[] args) {
        try {
            int status = ToolRunner.run(new CountDurationRunner(), args);
            System.exit(status);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
This problem has been bothering me for a long time. Some people say the classpath is wrong, but I don't know what to change. Please help!
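Since the missing class kv.key.ComDimension is a dependency rather than the main class, the usual explanation is simply that it is not inside ct_analysis.jar (or on the classpath yarn builds for the job), whatever the IDE's artifact settings claim. The class below is a hypothetical diagnostic, not part of the project: packaged into the same jar and launched the same way, it shows whether, and from where, the dependency can be loaded. Because the runner already goes through ToolRunner, missing jars could also be supplied with the standard -libjars option.

```
// Hypothetical helper: package it into ct_analysis.jar and run
//   yarn jar ct_analysis.jar ClasspathProbe
// to see whether kv.key.ComDimension is actually on the launch classpath.
public class ClasspathProbe {
    public static void main(String[] args) {
        String className = args.length > 0 ? args[0] : "kv.key.ComDimension";
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            System.out.println(c.getName() + " loaded from "
                    + (src == null ? "the bootstrap classpath" : src.getLocation()));
        } catch (ClassNotFoundException e) {
            System.out.println(className + " is NOT visible on this classpath");
        }
    }
}
```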
Setting up a pseudo-distributed hadoop environment
I have just started learning hadoop and want to set up a pseudo-distributed environment, but my own computer is not powerful enough to run a virtual machine. Can I use Alibaba Cloud to build a pseudo-distributed environment instead?
Hadoop reports some signature problem, please help
![screenshot](https://img-ask.csdn.net/upload/201508/20/1440039851_971381.png) I only started with hadoop recently and was about to run a HelloWorld example, but it errored out and I can't find a solution. Please help!
Error running the built-in wordcount example with hadoop jar
After installing hadoop-2.7.3 I wanted to try the built-in wordcount example, but it keeps failing with the error shown below (insufficient space). I haven't even started using the cluster, so how can that happen? This is a pseudo-distributed setup on a single machine; everything else runs normally and HDFS works (I can import files into it), but jobs started with the hadoop jar command fail. ![screenshot](https://img-ask.csdn.net/upload/201706/05/1496656412_150716.png)
Error connecting to a hadoop cluster from Windows
Connecting to a hadoop cluster from Windows fails. I have already put hadoop.dll into the hadoop bin directory on Windows and into system32, but it still has no effect. What should I do? ![screenshot](https://img-ask.csdn.net/upload/201508/31/1441030623_267379.png)
Installing Hadoop on Windows, startup fails with "No such file or directory"
这几天在折腾windows下安装Hadoop,完全按照网上写的标准步骤。 参考博文:http://www.cnblogs.com/kinglau/p/3270160.html 好不容易到最后了,在启动Hadoop时,一直报错如标题。 格式化hdfs日志: $ bin/hadoop namenode -format DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it. 15/07/13 23:07:53 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = 58-PC/192.168.0.102 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 2.7.0 STARTUP_MSG: classpath = D:\tools\cygwin32\home\lenovo\hadoop\etc\hadoop;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\activation-1.1.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\api-util-1.0.0-M20.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\asm-3.2.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\avro-1.7.4.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-cli-1.2.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\commons-codec-1.4.jar;D:\tools\cygwin32\home\lenovo\had 。。。。。。。。。。。。。。。。 STARTUP_MSG: java = 1.8.0_31 ************************************************************/ 15/07/13 23:07:53 INFO namenode.NameNode: createNameNode [-format] 15/07/13 23:07:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Formatting using clusterid: CID-052de37d-497f-4dd3-80bc-6c6c8a26d5d0 15/07/13 23:07:55 INFO namenode.FSNamesystem: No KeyProvider found. 
15/07/13 23:07:55 INFO namenode.FSNamesystem: fsLock is fair:true 15/07/13 23:07:56 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 15/07/13 23:07:56 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 15/07/13 23:07:56 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 15/07/13 23:07:56 INFO blockmanagement.BlockManager: The block deletion will start around 2015 ▒▒▒▒ 13 23:07:56 15/07/13 23:07:56 INFO util.GSet: Computing capacity for map BlocksMap 15/07/13 23:07:56 INFO util.GSet: VM type = 32-bit 15/07/13 23:07:56 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB 15/07/13 23:07:56 INFO util.GSet: capacity = 2^22 = 4194304 entries 15/07/13 23:07:56 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false 15/07/13 23:07:56 INFO blockmanagement.BlockManager: defaultReplication = 1 15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxReplication = 512 15/07/13 23:07:56 INFO blockmanagement.BlockManager: minReplication = 1 15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 15/07/13 23:07:56 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false 15/07/13 23:07:56 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000 15/07/13 23:07:56 INFO blockmanagement.BlockManager: encryptDataTransfer = false 15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 15/07/13 23:07:56 INFO namenode.FSNamesystem: fsOwner = lenovo (auth:SIMPLE) 15/07/13 23:07:56 INFO namenode.FSNamesystem: supergroup = supergroup 15/07/13 23:07:56 INFO namenode.FSNamesystem: isPermissionEnabled = true 15/07/13 23:07:56 INFO namenode.FSNamesystem: HA Enabled: false 15/07/13 23:07:56 INFO namenode.FSNamesystem: Append Enabled: true 15/07/13 23:07:56 INFO util.GSet: Computing capacity for map INodeMap
hadoop eclipse plugin error: no filesystem for scheme hdfs
The hadoop plugin for eclipse reports "error no filesystem for scheme hdfs". ![screenshot](https://img-ask.csdn.net/upload/201708/16/1502889438_609034.png)
Environment: CentOS 7, hadoop 2.8.1, jdk 1.8.0_141, eclipse oxygen, hadoop-eclipse-plugin 2.8.1.
Relevant hadoop configuration:
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-2.8.1-src/hadoop-dist/target/hadoop-2.8.1/tmp/</value>
  </property>
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    <description>The FileSystem for hdfs: uris.</description>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!--
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopData/hdfsMetaData/</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopData/hdfsData/</value>
  </property>
  -->
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>master:9001</value>
    <description>jobtracker's address</description>
  </property>
</configuration>
I have been at this for two days: the DFS view keeps reporting "error no filesystem for scheme hdfs". I also tried the mars and luna versions of eclipse and neither works. Could someone please take a look?
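"No FileSystem for scheme: hdfs" is raised by the client side when it cannot find org.apache.hadoop.hdfs.DistributedFileSystem on its own classpath, so the fs.hdfs.impl entry in the cluster's core-site.xml cannot fix the Eclipse plugin by itself: the plugin jar has to carry the hadoop-hdfs 2.8.1 jar (and its dependencies) too. As a hedged sketch, the plain-Java check below reproduces the same lookup outside Eclipse; it fails with the identical message when hadoop-hdfs is absent from the classpath and works once it is present. The conf.set("fs.hdfs.impl", ...) line mirrors the property the asker already added and is only a fallback for cases where the jar's service registration has been lost (for example by fat-jar merging).

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsSchemeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same address as the asker's fs.defaultFS; adjust if the cluster differs.
        conf.set("fs.defaultFS", "hdfs://master:9000");
        // Programmatic equivalent of the fs.hdfs.impl property from core-site.xml.
        // It only helps if DistributedFileSystem is on the classpath, which is the
        // real requirement behind "No FileSystem for scheme: hdfs".
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("scheme hdfs resolved to " + fs.getClass().getName());
        System.out.println("working directory: " + fs.getWorkingDirectory());
    }
}
```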
hadoop mapreduce error
java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy31.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy32.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
Job Submission failed with exception 'java.lang.RuntimeException(Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
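The important part of this trace is "could only be replicated to 0 nodes instead of minReplication (=1) ... 2 datanode(s) running and 2 node(s) are excluded": the NameNode sees both DataNodes but excluded both when choosing a write target, which normally points at DataNodes that are out of disk space or unreachable from the client, not at Hive itself. A rough checklist to run from the machine submitting the query, assuming default Hadoop 2.x ports and placeholder hostnames:

```bash
# Are both DataNodes live, and do they report free capacity?
hdfs dfsadmin -report

# Can a plain HDFS client write at all, outside of Hive?
hdfs dfs -put /etc/hosts /tmp/hdfs-write-test

# "nodes excluded in this operation" frequently means the client cannot reach
# the DataNode data-transfer port (50010 by default in Hadoop 2.x).
for dn in datanode1 datanode2; do   # placeholders: use your DataNode hostnames
  timeout 3 bash -c "</dev/tcp/$dn/50010" \
    && echo "$dn:50010 reachable" \
    || echo "$dn:50010 NOT reachable"
done
```

If the plain `-put` already fails, the DataNode logs and the free space under dfs.datanode.data.dir are the next things to check before revisiting the Hive side.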
When setting up Hadoop, starting it as a regular user leaves no node processes visible in jps
After starting Hadoop as root, jps shows all the expected processes, but after starting it as a regular user, jps shows no node processes at all. I am a complete beginner and this has been blocking me for a day; it feels like a permissions problem somewhere.
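Daemons that start under root but silently die under a normal account are most often explained by the logs, pid files and HDFS data directories having been created by root during an earlier start, so the regular user can no longer write to them. A sketch of the usual fix, assuming the installation lives under /usr/local/hadoop and the regular account is called hadoop (both the paths and the user name are assumptions):

```bash
# 1) Give the regular account ownership of the install tree, logs, pids and the
#    HDFS data directories.
sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /tmp/hadoop-hadoop   # default hadoop.tmp.dir if it was never changed

# 2) Log in as the regular user (not root) and start the daemons again:
start-dfs.sh

# 3) If jps still shows nothing, the daemon log says why it exited:
tail -n 50 /usr/local/hadoop/logs/hadoop-hadoop-namenode-*.log
```

The log read in the last step normally names the exact directory the daemon could not write to.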
Distributed Hadoop setup: jps on the master node cannot find the JobTracker process
The problem is that the JobTracker process cannot be found on the master node.

Master node:
[root@hadoop logs]# jps
3204 Jps
2729 NameNode
2888 SecondaryNameNode

Slave node 1:
[root@hadoop1 local]# jps
2741 TaskTracker
2686 DataNode
2827 Jps

Slave node 2:
[root@hadoop2 ~]# source /etc/profile
[root@hadoop2 ~]# jps
2614 DataNode
2670 TaskTracker
2756 Jps

mapred-site.xml clearly says the JobTracker should run on the master node (hadoop):

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop:9001</value>
    <description>change your own hostname</description>
  </property>
</configuration>

Yet the JobTracker process never shows up on the master.
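Since jps already shows the NameNode and SecondaryNameNode on the master, the start script almost certainly tried to launch the JobTracker and it exited right away; in a Hadoop 1.x layout the reason is normally spelled out in the JobTracker log (HDFS still in safe mode, mapred.system.dir not writable, or the hostname "hadoop" resolving to the wrong address). A sketch of where to look, assuming the standard 1.x directory layout and that HADOOP_HOME is set (both assumptions):

```bash
# Read the tail of the JobTracker log on the master first; it usually states
# exactly why the daemon exited.
tail -n 100 "$HADOOP_HOME"/logs/hadoop-*-jobtracker-*.log

# Make sure the hostname used in mapred.job.tracker resolves to the master's
# real IP on the master and on both slaves (not to 127.0.0.1).
grep hadoop /etc/hosts

# Try starting the JobTracker on its own so the failure is visible immediately.
"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker
```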
A Hadoop program that connects to MongoDB throws NoClassDefFoundError when packaged as a jar and run under hadoop
Error messages:
![error](https://img-ask.csdn.net/upload/201910/09/1570610725_197608.png)
![error](https://img-ask.csdn.net/upload/201910/09/1570610769_40828.png)
Code:
![code](https://img-ask.csdn.net/upload/201910/09/1570610818_23375.png)
The program runs fine inside Eclipse.
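A NoClassDefFoundError that only appears under `hadoop jar`, while the same code runs inside Eclipse, usually means the MongoDB jars (for example the mongo-hadoop connector and the mongo-java-driver) are on the Eclipse build path but are not shipped with the job. A sketch of the usual fix, with every jar name and path below being a placeholder: either build a fat jar that bundles the drivers, or pass them along explicitly, roughly like this:

```bash
# Make the driver jars visible to the client JVM that runs your main() ...
export HADOOP_CLASSPATH=/path/to/mongo-hadoop-core.jar:/path/to/mongo-java-driver.jar

# ... and ship them to the map/reduce tasks as well. Note: -libjars is honoured
# only when the driver class goes through ToolRunner/GenericOptionsParser.
hadoop jar myjob.jar com.example.MyDriver \
  -libjars /path/to/mongo-hadoop-core.jar,/path/to/mongo-java-driver.jar \
  input_path output_path
```

If the driver class does not use ToolRunner, bundling the dependencies into the job jar (for example with the Maven shade plugin) is the more reliable route.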