Hadoop 2.7.1: running wordcount fails with error 1639

The full log is below. Could someone please take a look? Thanks.
Application application_1450887330517_0001 failed 2 times due to AM Container for appattempt_1450887330517_0001_000002 exited with exitCode: 1639
For more detailed output, check application tracking page:http://Luke-PC:8088/cluster/app/application_1450887330517_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1450887330517_0001_02_000001
Exit code: 1639
Exception message: Incorrect command line arguments.
TaskExit: error (1639): Invalid command line argument. Consult the Windows Installer SDK for detailed command line help.
Stack trace: ExitCodeException exitCode=1639: Incorrect command line arguments.
TaskExit: error (1639): Invalid command line argument. Consult the Windows Installer SDK for detailed command line help.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Shell output: Usage: task create [TASKNAME] [COMMAND_LINE] |
task createAsUser [TASKNAME] [USERNAME] [PIDFILE] [COMMAND_LINE] |
task isAlive [TASKNAME] |
task kill [TASKNAME]
task processList [TASKNAME]
Creates a new task jobobject with taskname
Creates a new task jobobject with taskname as the user provided
Checks if task jobobject is alive
Kills task jobobject
Prints to stdout a list of processes in the task
along with their resource usage. One process per line
and comma separated info per process
ProcessId,VirtualMemoryCommitted(bytes),
WorkingSetSize(bytes),CpuTime(Millisec,Kernel+User)
Container exited with a non-zero exit code 1639
Failing this attempt. Failing the application.
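A note on the log above: exit code 1639 is the Windows Installer system error "invalid command line argument", surfaced here because on Windows the NodeManager launches containers through winutils.exe (the `task create` usage dump is winutils rejecting the arguments it was given). In practice this often points at a winutils.exe/hadoop.dll pair that does not match the Hadoop version or architecture in use. A minimal diagnostic sketch, assuming only that HADOOP_HOME is set; the checks are illustrative, not an official fix:

```java
import java.io.File;

public class WinutilsCheck {
    public static void main(String[] args) {
        // Assumption: HADOOP_HOME points at the Hadoop 2.7.1 install on Windows.
        String home = System.getenv("HADOOP_HOME");
        if (home == null) {
            System.err.println("HADOOP_HOME is not set");
            return;
        }
        // The NodeManager shells out to bin\winutils.exe to create container tasks;
        // both native helpers must exist and match the Hadoop version/architecture.
        File winutils = new File(home, "bin" + File.separator + "winutils.exe");
        File hadoopDll = new File(home, "bin" + File.separator + "hadoop.dll");
        System.out.println("winutils.exe present: " + winutils.exists());
        System.out.println("hadoop.dll present:   " + hadoopDll.exists());
        System.out.println("JVM arch (os.arch):   " + System.getProperty("os.arch"));
    }
}
```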

2 answers

Hi, did you ever get this error resolved? I'm hitting the same thing. Could you walk me through how you fixed it?

Other related questions
Error running wordcount on Hadoop 2.2
Hadoop 2.2 + JDK 1.7. Running the wordcount example with hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /word /ws fails with: org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1449733659077_0001_m_000000_0: Error: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.InputSplit Any pointers would be appreciated.
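This ClassCastException is the classic symptom of mixing the old org.apache.hadoop.mapred API with the new org.apache.hadoop.mapreduce API in one job. A minimal sketch of a wordcount driver that stays entirely on the new API (class and path names here are illustrative, not the poster's actual code):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NewApiWordCount {

    // Mapper and Reducer both extend the NEW (mapreduce) base classes;
    // nothing from org.apache.hadoop.mapred is referenced anywhere.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(NewApiWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /word
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /ws
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The key point is that the mapper, reducer, and input/output format classes must all come from the same API generation; one stray mapred import is enough to produce the cast failure above.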
Hadoop 2.5.2 cannot run wordcount or hdfs -put
hadoop 2.5.2, one master and two slaves named slave1 and slave2. After startup the master shows: 30784 NameNode 31394 Jps 30972 SecondaryNameNode 31132 ResourceManager while slave1 and slave2 both show: 8064 Jps 7943 NodeManager 7834 DataNode Nothing looks abnormal, but when I run hadoop fs -put README.txt /input on the master it hangs for a long time and finally fails with: 17/03/09 19:59:11 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 17/03/09 19:59:11 INFO hdfs.DFSClient: Abandoning BP-247473795-10.202.15.17-1489054138763:blk_1073741827_1003 17/03/09 19:59:11 INFO hdfs.DFSClient: Excluding datanode 10.202.15.175:50010 17/03/09 20:01:18 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 17/03/09 20:01:18 INFO hdfs.DFSClient: Abandoning BP-247473795-10.202.15.17-1489054138763:blk_1073741828_1004 17/03/09 20:01:18 INFO hdfs.DFSClient: Excluding datanode 10.202.15.174:50010 17/03/09 20:01:18 WARN hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2791) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:606) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:455) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy9.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy9.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1449) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1270) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) put: File /input/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. Every machine has its firewall disabled, and I have repeatedly deleted the directories behind hadoop.tmp.dir, dfs.name.dir and dfs.data.dir and re-run hadoop namenode -format, but nothing changes. Yet if I run hadoop fs -put README.txt /input on a slave instead, it succeeds and the file is copied; all three machines then see it. This has been troubling me for days; any help would be appreciated.
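The pattern here, where -put works from the slaves but not from the master and the client times out connecting to the datanodes' port 50010 before excluding them, suggests the master cannot reach the datanodes' data-transfer port even though NameNode RPC works, so some routing or firewall layer between master and slaves is likely still in the way. A minimal client-side write test to run from the failing host; the fs.defaultFS value below is an assumption to be replaced with the cluster's own:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumption: adjust to the cluster's actual NameNode address.
        conf.set("fs.defaultFS", "hdfs://master:9000");
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/input/conn-test.txt"))) {
            // create() opens a write pipeline directly from this host to a
            // datanode on port 50010; a firewall or routing problem between
            // this host and the datanodes reproduces the timeout seen above.
            out.writeUTF("datanode connectivity test");
        }
        System.out.println("write succeeded");
    }
}
```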
Error developing against Hadoop 2.2 on Windows
Exception in thread "main" java.io.IOException: Cannot run program "E:\hadoop-2.4.0\bin\winutils.exe": CreateProcess error=216, The image file %1 is valid, but is for a machine type other than the current machine at java.lang.ProcessBuilder.start(Unknown Source) at org.apache.hadoop.util.Shell.runCommand(Shell.java:404) at org.apache.hadoop.util.Shell.run(Shell.java:379) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589) at org.apache.hadoop.util.Shell.execCommand(Shell.java:678) at org.apache.hadoop.util.Shell.execCommand(Shell.java:661) at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639) at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:435) at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:277) at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:344) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Unknown Source) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286) at WordCount.main(WordCount.java:84) Caused by: java.io.IOException: CreateProcess error=216, The image file %1 is valid, but is for a machine type other than the current machine at java.lang.ProcessImpl.create(Native Method) at java.lang.ProcessImpl.<init>(Unknown Source) at java.lang.ProcessImpl.start(Unknown Source) ... 19 more
NameNode format fails while setting up Hadoop 2.2.0
Formatting the HDFS NameNode fails; please help!!! FATAL namenode.NameNode: Exception in namenode join java.lang.ClassCastException: com.sun.org.apache.xerces.internal.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2111) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2001) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1918) at org.apache.hadoop.conf.Configuration.get(Configuration.java:721) at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:740) at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:965) at org.apache.hadoop.security.Groups.<init>(Groups.java:62) at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214) at org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275) at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:269) at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 15/04/13 04:15:01 INFO util.ExitUtil: Exiting with status 1 15/04/13 04:15:02 INFO namenode.NameNode: SHUTDOWN_MSG:
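A DeferredElementNSImpl-cannot-be-cast-to-org.w3c.dom.Text error thrown from Configuration.loadResource usually means one of the Hadoop *-site.xml files is malformed, typically an element nested inside <name> or <value> where plain text is expected. A small sketch for locating the bad file by loading each one in isolation; the path below is an assumption:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
    public static void main(String[] args) {
        // Assumption: point this at each *-site.xml under $HADOOP_CONF_DIR in turn.
        Configuration conf = new Configuration(false);
        conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
        // Resources are parsed lazily; this get() forces the parse, so a
        // malformed <name>/<value> fails here with the same ClassCastException.
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
    }
}
```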
Installing Hadoop 2.6.0 on Ubuntu
Two questions: 1. When I run bin/hadoop jar share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-sources.jar it fails with: RunJar jarFile [mainClass] args..., 2. When I run /usr/local/hadoop$ org.apache.hadoop.examples.WordCount input output it fails with: org.apache.hadoop.examples.WordCount: command not found. I wanted to run a quick test before touching the Hadoop configuration files, but this is what I get. How should I handle it? Thanks!
CentOS 6.8: pseudo-distributed Hadoop 2.x NameNode will not start
Formatting the node succeeds, but the NameNode will not start. The log shows the following error:
```
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2017-05-22T10:49Z
STARTUP_MSG: java = 1.8.0_144
************************************************************/
2020-01-31 16:37:06,931 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-01-31 16:37:06,935 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-01-31 16:37:07,161 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-01-31 16:37:07,233 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-01-31 16:37:07,233 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-01-31 16:37:07,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://hadoop101:9000
2020-01-31 16:37:07,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use hadoop101:9000 to access this namenode/service.
2020-01-31 16:37:07,409 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://huawei_mate_10-53013e4c60:50070
2020-01-31 16:37:07,457 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-01-31 16:37:07,464 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-01-31 16:37:07,469 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-01-31 16:37:07,473 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-01-31 16:37:07,475 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: The value of property bind.address must not be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1134)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
at org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:398)
at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:351)
at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:114)
at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:290)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:126)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:752)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:638)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2020-01-31 16:37:07,477 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2020-01-31 16:37:07,479 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop101/192.168.117.101
************************************************************/
```
The key error is java.lang.IllegalArgumentException: The value of property bind.address must not be null. core-site.xml is configured as:
<configuration>
  <!-- NameNode address for HDFS -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop101:9000</value>
  </property>
  <!-- hadoop101 is already mapped in the hosts file -->
  <!-- Storage directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp</value>
  </property>
</configuration>
Hoping someone can help with this. Many thanks.
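One detail worth noticing in that log: "Starting Web-server for hdfs at: http://huawei_mate_10-53013e4c60:50070" shows the machine's own hostname contains underscores. Java's URI parser returns a null host for such names, which is consistent with the "bind.address must not be null" failure; renaming the host to letters, digits, and hyphens only (and mapping it in /etc/hosts) is the usual cure. A tiny sketch of the effect:

```java
import java.net.URI;

public class HostnameUriCheck {
    public static void main(String[] args) {
        // An underscore makes the authority invalid as a server-based URI,
        // so getHost() comes back null -- the null that HttpServer2 later
        // rejects as "bind.address must not be null".
        System.out.println(URI.create("http://huawei_mate_10-53013e4c60:50070").getHost()); // null
        System.out.println(URI.create("http://hadoop101:50070").getHost());                 // hadoop101
    }
}
```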
Kettle 7.1 connecting to Hadoop 2.7.3 cannot read the HDFS directory
**1.** The shim is HDP 2.5, which matches the version. **2.** The connection test to Hadoop 2.7.3 reports success (user home directory access and root directory access are not configured). **3.** In the Hadoop File Input step, the file list cannot be read ("you dont seem to be getting a connection to the hadoop cluster"). However, when connecting to Hadoop 2.2 the directory listing reads fine. Could someone take a look when they have a moment?
Maven 3.3.9 fails to build Hadoop 2.6.5, please help
[INFO] Building Apache Hadoop Main 2.6.5 [INFO] ------------------------------------------------------------------------ Downloading: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml [WARNING] Could not validate integrity of download from http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml: Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce [WARNING] Checksum validation failed, expected <html> but is b113767b47336dcc165c5dd2222b5df4cb86b7ce for http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml Downloaded: http://mirrors.ibiblio.org/pub/mirrors/maven2/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata.xml (99 KB at 11.8 KB/sec) [WARNING] The metadata /root/.m2/repository/org/apache/maven/plugins/maven-javadoc-plugin/maven-metadata-ibiblio.org.xml is invalid: end tag name </body> must match start tag name <hr> from line 888 (position: START_TAG seen ... 08-Nov-2014 19:04 207\r\n</pre><hr></body>... @888:18) [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main ................................. FAILURE [ 8.416 s] [INFO] Apache Hadoop Build Tools .......................... SKIPPED [INFO] Apache Hadoop Project POM .......................... SKIPPED [INFO] Apache Hadoop Annotations .......................... SKIPPED [INFO] Apache Hadoop Assemblies ........................... SKIPPED [INFO] Apache Hadoop Project Dist POM ..................... SKIPPED [INFO] Apache Hadoop Maven Plugins ........................ SKIPPED [INFO] Apache Hadoop MiniKDC .............................. SKIPPED [INFO] Apache Hadoop Auth ................................. SKIPPED [INFO] Apache Hadoop Auth Examples ........................ SKIPPED [INFO] Apache Hadoop Common ............................... SKIPPED [INFO] Apache Hadoop NFS .................................. SKIPPED [INFO] Apache Hadoop KMS .................................. SKIPPED [INFO] Apache Hadoop Common Project ....................... SKIPPED [INFO] Apache Hadoop HDFS ................................. SKIPPED [INFO] Apache Hadoop HttpFS ............................... SKIPPED [INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED [INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED [INFO] Apache Hadoop HDFS Project ......................... SKIPPED [INFO] hadoop-yarn ........................................ SKIPPED [INFO] hadoop-yarn-api .................................... SKIPPED [INFO] hadoop-yarn-common ................................. SKIPPED [INFO] hadoop-yarn-server ................................. SKIPPED [INFO] hadoop-yarn-server-common .......................... SKIPPED [INFO] hadoop-yarn-server-nodemanager ..................... SKIPPED [INFO] hadoop-yarn-server-web-proxy ....................... SKIPPED [INFO] hadoop-yarn-server-applicationhistoryservice ....... SKIPPED [INFO] hadoop-yarn-server-resourcemanager ................. SKIPPED [INFO] hadoop-yarn-server-tests ........................... SKIPPED [INFO] hadoop-yarn-client ................................. 
SKIPPED [INFO] hadoop-yarn-applications ........................... SKIPPED [INFO] hadoop-yarn-applications-distributedshell .......... SKIPPED [INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SKIPPED [INFO] hadoop-yarn-site ................................... SKIPPED [INFO] hadoop-yarn-registry ............................... SKIPPED [INFO] hadoop-yarn-project ................................ SKIPPED [INFO] hadoop-mapreduce-client ............................ SKIPPED [INFO] hadoop-mapreduce-client-core ....................... SKIPPED [INFO] hadoop-mapreduce-client-common ..................... SKIPPED [INFO] hadoop-mapreduce-client-shuffle .................... SKIPPED [INFO] hadoop-mapreduce-client-app ........................ SKIPPED [INFO] hadoop-mapreduce-client-hs ......................... SKIPPED [INFO] hadoop-mapreduce-client-jobclient .................. SKIPPED [INFO] hadoop-mapreduce-client-hs-plugins ................. SKIPPED [INFO] Apache Hadoop MapReduce Examples ................... SKIPPED [INFO] hadoop-mapreduce ................................... SKIPPED [INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED [INFO] Apache Hadoop Distributed Copy ..................... SKIPPED [INFO] Apache Hadoop Archives ............................. SKIPPED [INFO] Apache Hadoop Rumen ................................ SKIPPED [INFO] Apache Hadoop Gridmix .............................. SKIPPED [INFO] Apache Hadoop Data Join ............................ SKIPPED [INFO] Apache Hadoop Ant Tasks ............................ SKIPPED [INFO] Apache Hadoop Extras ............................... SKIPPED [INFO] Apache Hadoop Pipes ................................ SKIPPED [INFO] Apache Hadoop OpenStack support .................... SKIPPED [INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED [INFO] Apache Hadoop Client ............................... SKIPPED [INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED [INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED [INFO] Apache Hadoop Tools Dist ........................... SKIPPED [INFO] Apache Hadoop Tools ................................ SKIPPED [INFO] Apache Hadoop Distribution ......................... SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 06:03 min [INFO] Finished at: 2018-06-23T11:25:17+08:00 [INFO] Final Memory: 27M/69M [INFO] ------------------------------------------------------------------------ [ERROR] Error resolving version for plugin 'org.apache.maven.plugins:maven-javadoc-plugin' from the repositories [local (/root/.m2/repository), ibiblio.org (http://mirrors.ibiblio.org/pub/mirrors/maven2)]: Plugin not found in any plugin repository -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginVersionResolutionException You have new mail in /var/spool/mail/root
Hive 0.9.0 + HBase 0.96.2 + Hadoop 2.2.0 integration: running an HQL query fails as follows
hive> select * from hbasehive_table; OK Exception in thread "main" java.lang.InstantiationError: org.apache.hadoop.mapreduce.JobContext at org.apache.hadoop.hive.shims.Hadoop20SShims.newJobContext(Hadoop20SShims.java:58) at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:473) at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:281) at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:320) at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:154) at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1377) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
After upgrading Hadoop to 2.2.0, jobs fail with Shell$ExitCodeException: id: dr.who: No such user
2013-12-03 11:34:56,590 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user dr.who 2013-12-03 11:34:56,589 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user dr.who org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user at org.apache.hadoop.util.Shell.runCommand(Shell.java:504) at org.apache.hadoop.util.Shell.run(Shell.java:417) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636) at org.apache.hadoop.util.Shell.execCommand(Shell.java:725) at org.apache.hadoop.util.Shell.execCommand(Shell.java:708) at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83) at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52) at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50) at org.apache.hadoop.security.Groups.getGroups(Groups.java:95) at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:63) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618) at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) 
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384) at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1310) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Hadoop 2.5.2: formatting HDFS fails
16/05/31 20:30:38 WARN namenode.FSEditLog: No class configured for node2, dfs.namenode.edits.journal-plugin.node2 is empty
16/05/31 20:30:38 FATAL namenode.NameNode: Exception in namenode join java.lang.IllegalArgumentException: No class configured for node2
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getJournalClass(FSEditLog.java:1532)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1546)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:267)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
16/05/31 20:30:38 INFO util.ExitUtil: Exiting with status 1
hdfs-site.xml is configured as follows:
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>node2:8485;node3:8485;node4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/hadoop/journalnodedata</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
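The likely culprit is visible in the configuration itself: the dfs.namenode.shared.edits.dir value is missing the qjournal:// scheme, so Hadoop parses "node2" as the URI scheme and then looks for a journal plugin named node2, which is exactly the "No class configured for node2" message above. A tiny sketch of how the two forms parse; the corrected value is an assumption based on the intended quorum-journal setup:

```java
import java.net.URI;

public class EditsDirSchemeCheck {
    public static void main(String[] args) {
        // As configured: "node2" is taken to be the URI scheme.
        URI asConfigured = URI.create("node2:8485;node3:8485;node4:8485/mycluster");
        // With the qjournal:// prefix, the scheme names the journal manager Hadoop knows.
        URI intended = URI.create("qjournal://node2:8485;node3:8485;node4:8485/mycluster");
        System.out.println(asConfigured.getScheme()); // node2
        System.out.println(intended.getScheme());     // qjournal
    }
}
```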
Hadoop 2.7.2 cluster setup: after formatting, the NameNode does not start
Step 1: run hadoop namenode -formate STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z STARTUP_MSG: java = 1.7.0_76 ************************************************************/ 16/08/02 04:26:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 16/08/02 04:26:16 INFO namenode.NameNode: createNameNode [-formate] Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-rollback] | [-rollingUpgrade <rollback|downgrade|started> ] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force] ] | [-metadataVersion ] ] 16/08/02 04:26:16 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100 Step 2: run start-all.sh; the result: [root@master sbin]# sh start-all.sh This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh 16/08/02 05:45:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Starting namenodes on [master] master: starting namenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-master.out slave2: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave2.out slave3: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave3.out slave1: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave1.out Starting secondary namenodes [master] master: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-master.out 16/08/02 05:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable starting yarn daemons starting resourcemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-master.out slave2: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave2.out slave3: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave3.out slave1: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave1.out [root@master sbin]# jps 2613 ResourceManager 2467 SecondaryNameNode 2684 Jps NameNode log: 2016-08-02 05:49:49,910 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2016-08-02 05:49:49,928 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 2016-08-02 05:49:49,928 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking. 2016-08-02 05:49:49,930 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false 2016-08-02 05:49:49,934 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2016-08-02 05:49:49,935 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: NameNode is not formatted. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554) 2016-08-02 05:49:49,949 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2016-08-02 05:49:49,961 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
Hadoop 2.6 with Snappy fails
Using Snappy on Hadoop 2.6 fails with: native snappy library not available: this version of libhadoop was built without snappy support. ![screenshot](https://img-ask.csdn.net/upload/201505/05/1430759464_528696.png) Yet Hadoop itself clearly reports Snappy as enabled.
Running wordcount from Eclipse on Windows fails, please advise
Running the wordcount program from Eclipse on Windows 8 fails. Hadoop is installed on CentOS inside VMware. The error log is below; please advise. log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. Exception in thread "main" java.io.IOException: Failed on local exception: java.net.SocketException: Network is unreachable: no further information; Host Details : local host is: "hadoop/192.168.182.1"; destination host is: "0.0.0.192":8020; at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at org.apache.hadoop.ipc.Client.call(Client.java:1472) at org.apache.hadoop.ipc.Client.call(Client.java:1399) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988) at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400) at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145) at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:562) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314) at WordCount.main(WordCount.java:58) Caused by: java.net.SocketException: Network is unreachable: no further information at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705) at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521) at org.apache.hadoop.ipc.Client.call(Client.java:1438) ... 28 more Windows can ping the CentOS box and CentOS can ping Windows, so why is the network reported unreachable? Please advise.
hadoop wordcount error: 192.168.79.172 to  :54895 connection refused
Hi all, I'm new to Hadoop. After installing Hadoop on my own machine, running wordcount fails, and the error has blocked me for two days; I'm hoping someone can help. The error log: [hadoop@hadoop0 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output 17/04/14 13:49:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 17/04/14 13:49:48 INFO client.RMProxy: Connecting to ResourceManager at hadoop0/192.168.79.172:8032 17/04/14 13:49:50 INFO input.FileInputFormat: Total input paths to process : 2 17/04/14 13:49:50 INFO mapreduce.JobSubmitter: number of splits:2 17/04/14 13:49:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1492077890345_0003 17/04/14 13:49:51 INFO impl.YarnClientImpl: Submitted application application_1492077890345_0003 17/04/14 13:49:51 INFO mapreduce.Job: The url to track the job: http://hadoop0:8088/proxy/application_1492077890345_0003/ 17/04/14 13:49:51 INFO mapreduce.Job: Running job: job_1492077890345_0003 17/04/14 13:50:12 INFO mapreduce.Job: Job job_1492077890345_0003 running in uber mode : false 17/04/14 13:50:12 INFO mapreduce.Job: map 0% reduce 0% 17/04/14 13:50:12 INFO mapreduce.Job: Job job_1492077890345_0003 failed with state FAILED due to: Application application_1492077890345_0003 failed 2 times due to Error launching appattempt_1492077890345_0003_000002. Got exception: java.net.ConnectException: Call From hadoop0/192.168.79.172 to  :54895 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1480) at org.apache.hadoop.ipc.Client.call(Client.java:1407) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy83.startContainers(Unknown Source) at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96) at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119) at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707) at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) at org.apache.hadoop.ipc.Client.call(Client.java:1446) ... 9 more . Failing the application. 17/04/14 13:50:12 INFO mapreduce.Job: Counters: 0 The metrics file contents are as follows: ![screenshot](https://img-ask.csdn.net/upload/201704/14/1492150334_860585.png) Also, I noticed this line in the error log: Call From hadoop0/192.168.79.172 to  :54895 failed. Why does port 54895 have nothing before it where the IP should be? Is something in my environment misconfigured?
Help! Hadoop 2.2.0 cluster: the NameNode reports a NullPointerException after starting HDFS
The log: 2015-02-07 01:01:46,610 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring NN shutdown. Shutting down immediately. java.lang.NullPointerException at org.apache.hadoop.hdfs.DFSUtil.substituteForWildcardAddress(DFSUtil.java:942) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.getHttpAddress(StandbyCheckpointer.java:108) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.setNameNodeAddresses(StandbyCheckpointer.java:90) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.<init>(StandbyCheckpointer.java:76) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startStandbyServices(FSNamesystem.java:994) at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startStandbyServices(NameNode.java:1456) at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.enterState(StandbyState.java:58) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:686) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) 2015-02-07 01:01:46,614 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 2015-02-07 01:01:46,620 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: I just don't understand why it keeps throwing a NullPointerException, and when I debug remotely the error does not occur. I'm completely lost.
Hadoop 3.1.1: submitting a wordcount job to YARN from IDEA fails
Running in local mode works, but running on YARN fails. My Hadoop is deployed in a VM. The error is below: ![screenshot](https://img-ask.csdn.net/upload/201909/25/1569345769_379067.png) I have already set the parameters the error message asks for: ![screenshot](https://img-ask.csdn.net/upload/201909/25/1569345867_269148.png) but the same error persists and I cannot find a lead. Here is the code snippet I use to submit to YARN: ![screenshot](https://img-ask.csdn.net/upload/201909/25/1569346049_348203.png) Hoping someone can help.
MyEclipse 10.7 connecting to Hadoop 2.7.1
Opening the Map/Reduce perspective: problems opening perspective 'org.apache.hadoop.eclipse.Perspective'. Opening the Hadoop Map/Reduce preference page: Unable to create the selected preference page. org/apache/hadoop/eclipse/preferences/MapReducePreferencePage : Unsupported major.minor version 51.0. The JDK has already been switched to 1.7 and the compiler level changed as well. Any help appreciated.
Problems implementing K-means on Hadoop 2.7.3
Environment: Hadoop 2.7.3 cluster on JDK 1.7, Eclipse on JDK 1.7, one master and two nodes. I took a Hadoop implementation of the K-means algorithm from the web and modified it slightly. Debugging locally, it always stops at the following line with no further activity; any guidance appreciated: 2017-04-26 22:19:26,539 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable