Error when submitting a MapReduce job

I've recently been submitting jobs like this: I package the MR job's business logic into a jar, upload it to HDFS, and submit the job from Eclipse (without the Hadoop plugin installed). The job fails with `Error: java.io.IOException: Unable to initialize any output collector`. If I instead copy the jar onto the HDFS node and submit it there, the job completes fine.
Why is that? The submission code:
```java
Configuration conf = new Configuration();
conf.set("mapreduce.job.name", "count22");
// the job jar is referenced directly on HDFS rather than on the local disk
conf.set("mapreduce.job.jar", "hdfs://172.16.200.210:8020/user/root/testjar/hadoopTest.jar");
conf.set("mapreduce.job.output.key.class", "org.apache.hadoop.io.Text");
conf.set("mapreduce.job.output.value.class", "org.apache.hadoop.io.IntWritable");
conf.set("mapred.mapper.class", "hadoop.test$Map");
conf.set("mapred.reducer.class", "hadoop.test$Reduce");
conf.set("mapred.combiner.class", "hadoop.test$Reduce");
conf.set("mapred.input.format.class", "org.apache.hadoop.mapred.TextInputFormat");
conf.set("mapred.output.format.class", "org.apache.hadoop.mapred.TextOutputFormat");
conf.set("mapreduce.input.fileinputformat.inputdir",
        "hdfs://172.16.200.210:8020/user/root/input/count.txt");
conf.set("mapreduce.output.fileoutputformat.outputdir",
        "hdfs://172.16.200.210:8020/user/root/sc");

JobConf jobconf = new JobConf(conf);
JobClient.runJob(jobconf);
```
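For what it's worth, `mapreduce.job.jar` is normally given a path on the *client's local* filesystem; `JobClient` then uploads that jar to the job staging directory itself so the map tasks can load the user classes. A hedged sketch of that setup follows; the local path is hypothetical, and this is one common explanation for the collector error, not a confirmed diagnosis:

```java
// Minimal sketch, assuming the jar also exists on the machine running Eclipse.
// JobConf.setJar() takes a local path; submission copies it to the staging
// area on HDFS, from which task containers localize the user classes.
JobConf jobconf = new JobConf(conf);
jobconf.setJobName("count22");
jobconf.setJar("C:/jobs/hadoopTest.jar"); // hypothetical local path
JobClient.runJob(jobconf);
```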

Other related questions
MapReduce program runs without errors, but map produces no output?

I used the code from https://blog.csdn.net/daihanglai7622/article/details/84760611. Run locally, the program is correct and produces output, but run on the cluster the output is empty. Judging from the logs, the map stage apparently already goes wrong. ![screenshot](https://img-ask.csdn.net/upload/201911/20/1574218622_84460.png) Since it runs locally, the logic should be fine, so where is the problem? I'm a beginner asking for help. Also... I have no C-coins, so I can't offer a bounty...
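When a job behaves differently on the cluster than locally, the aggregated per-task logs usually say more than the client console. A hedged example of pulling them with the standard YARN CLI (the application id is a placeholder for the one YARN printed at submission):

```sh
yarn logs -applicationId <application-id> | less
```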

Running a Hadoop MapReduce example from Eclipse fails

Running the examples bundled with Hadoop from the terminal works, and the Hadoop nodes are healthy. The error is as follows:

```
17/09/05 20:20:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/05 20:20:16 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
17/09/05 20:20:16 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
Exception in thread "main" java.net.ConnectException: Call From master/192.168.1.110 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at mapreduce.Temperature.main(Temperature.java:202)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 28 more
```
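The client resolves the NameNode as localhost:9000 even though it runs on master, which usually means the project in Eclipse is not picking up the cluster's configuration. A hedged sketch of the relevant core-site.xml entry on the client's classpath (the host name is an assumption based on the log):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
```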

QuartzJob invoking a Hadoop MapReduce job fails

```
Error: java.io.IOException: com.mysql.jdbc.Driver
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
```
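`DBOutputFormat.getRecordWriter` wrapping the driver class name in an IOException typically means `com.mysql.jdbc.Driver` could not be loaded inside the reduce task, i.e. the MySQL connector jar is not on the task classpath. A hedged sketch (the HDFS path and connector version are hypothetical):

```java
// Ship the JDBC driver jar with the job so the YarnChild JVMs can load it.
job.addFileToClassPath(new Path("/lib/mysql-connector-java-5.1.47.jar"));
```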

MapReduce job in Hadoop fails

I used MapReduce in Hadoop to analyze a novel of less than 4 MB. The job ran for nearly an hour and a half before finishing, left a `_temporary` directory in the output, and reported an error on top of that; the Java code itself is fine.

Running a Hadoop MapReduce job fails!!!

When I run a Hadoop MapReduce job on small files it succeeds, but on a slightly larger file I get the error below. I have gone through a lot of material online without finding a solution; can anyone help?

```
could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1337)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)
```
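"could only be replicated to 0 nodes" during a write usually points at the single DataNode being out of usable space, or unreachable from the writer, rather than at the job itself. A quick first check with the standard HDFS admin tool:

```sh
hdfs dfsadmin -report   # check remaining capacity and live datanodes
```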

Running hadoop-2.6 MapReduce from local Eclipse fails, please help

The error output is:

```
2016-02-26 11:24:07,722 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-02-26 11:24:07,727 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-02-26 11:24:08,081 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-02-26 11:24:08,091 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(252)) - Cleaning up the staging area file:/tmp/hadoop-fire/mapred/staging/fire1322517587/.staging/job_local1322517587_0001
2016-02-26 11:24:08,095 WARN [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1674)) - PriviledgedActionException as:fire (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/fire/dedup_in
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/fire/dedup_in
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:304)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:321)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at com.hebut.mr.Dedup.main(Dedup.java:135)
```
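Two hints in the log: `No job jar file set`, and the input path resolving to `file:/user/fire/dedup_in`, i.e. the local filesystem rather than HDFS. A hedged sketch of making the intent explicit (the NameNode address is an assumption):

```java
FileInputFormat.addInputPath(job, new Path("hdfs://namenode:9000/user/fire/dedup_in"));
```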

Running MapReduce through the Hadoop Eclipse plugin fails

I set up a fully distributed HDFS cluster in virtual machines and installed the Eclipse plugin on Windows 7. Running a MapReduce program fails, apparently because it cannot connect; please help. ![screenshot](https://img-ask.csdn.net/upload/201608/08/1470613451_63912.jpg)![screenshot](https://img-ask.csdn.net/upload/201608/08/1470613343_436152.jpg)

HBase MapReduce fails: java.lang.NullPointerException

The error resembles the one in this thread: http://bbs.csdn.net/topics/390865764 ; I'd appreciate advice from the experts.

```
2017-09-15 23:19:15 [WARN]-[] Your hostname, admin-PC resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:c0a8:164%23, but we couldn't find any external IP address!
2017-09-15 23:19:15 [INFO]-[org.apache.hadoop.conf.Configuration.deprecation] session.id is deprecated. Instead, use dfs.metrics.session-id
2017-09-15 23:19:15 [INFO]-[org.apache.hadoop.metrics.jvm.JvmMetrics] Initializing JVM Metrics with processName=JobTracker, sessionId=
Exception in thread "main" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:487)
    at org.apache.hadoop.util.Shell.run(Shell.java:460)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:720)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:813)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:796)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:656)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:444)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:308)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:147)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at TestOnlyMapper.main(TestOnlyMapper.java:35)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
```

The code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

import java.io.IOException;

/**
 * Created by admin on 2017/9/15.
 */
public class TestOnlyMapper {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://hadoop.master:8020/hdfs/hbase");
        conf.set("hbase.zookeeper.quorum", "hadoop.master,hadoop.slave11,hadoop.slave12");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        Job job = new Job(conf, "test");
        job.setJarByClass(TestOnlyMapper.class);
        Scan scan = new Scan();
        job.setMapSpeculativeExecution(false);
        job.setReduceSpeculativeExecution(false);
        TableMapReduceUtil.initTableMapperJob("test11", scan, OMapper.class, null, null, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.waitForCompletion(true);
    }
}

class OMapper extends TableMapper<Text, LongWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {
        for (Cell cell : value.listCells()) {
            System.out.println("---------------------");
            // note: this prints the byte[] object reference, not its contents;
            // Bytes.toString(CellUtil.cloneQualifier(cell)) would show the text
            System.out.println("cell.getQualifier()= " + cell.getQualifier().toString());
            System.out.println("---------------------");
        }
    }
}
```
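The NPE comes out of `ProcessBuilder.start` via `org.apache.hadoop.util.Shell` during job submission, which on Windows is the classic symptom of a missing `winutils.exe`/`HADOOP_HOME`. A hedged sketch (the directory is hypothetical and must contain `bin\winutils.exe`):

```java
// Set before any Hadoop classes are touched.
System.setProperty("hadoop.home.dir", "C:/hadoop");
```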

The reduce function in MapReduce never runs

I wrote some code to get familiar with MapReduce, but after finishing it I found that the reduce function never executes, and the run reports no error either. I have browsed many forums without finding a solution. Since this is my first contact with MapReduce, I don't know its programming model well; could someone check whether the code has problems? The code is as follows.

Mapper:

```java
package Utils;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BayMapper extends Mapper<Object, Text, Cell, Text> {
    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        Cell[][] cells = new Cell[ClusterConfig.cellRow][ClusterConfig.cellColumn];
        int cellx = 0;
        int celly = 0;
        for (int i = 0; i < ClusterConfig.cellRow; i++)
            for (int j = 0; j < ClusterConfig.cellColumn; j++) {
                cells[i][j] = new Cell();
            }
        while (itr.hasMoreTokens()) {
            String outValue = new String(itr.nextToken());
            System.out.println(outValue);
            String[] list = outValue.split(","); // list.length = 2
            for (int i = 0; i < list.length; i++) {
                double x;
                double y;
                x = Double.valueOf(list[0]);
                y = Double.valueOf(list[1]);
                cellx = (int) Math.ceil((x - ClusterConfig.xmin) / ClusterConfig.intervalx);
                celly = (int) Math.ceil((y - ClusterConfig.ymin) / ClusterConfig.intervaly);
                // cells[cellx][celly].addnumberPoints(); // number of points in this cell
            }
            context.write(cells[cellx][celly], new Text(outValue));
        }
    }
}
```

Reducer:

```java
package Utils;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class BayReducer extends Reducer<Cell, Text, Cell, IntWritable> {
    @Override
    protected void reduce(Cell key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int count = 0;
        Iterator<Text> iterator = values.iterator();
        // editor's note: iterator.next() is never called, so hasNext()
        // never advances -- this loop cannot terminate as written
        while (iterator.hasNext()) {
            count++;
        }
        if (count >= 20) {
            context.write(key, new IntWritable(count));
        }
    }
}
```

Driver:

```java
package Cluster;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import Utils.BayMapper;
import Utils.BayReducer;
import Utils.Cell;

public class ClusterDriver {
    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "localhost:9000");
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: Data Cluster <in> <out>");
            System.exit(2);
        }
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "Baymax Cluster");
        job.setJarByClass(ClusterDriver.class);
        job.setMapperClass(BayMapper.class);
        job.setReducerClass(BayReducer.class);
        job.setOutputKeyClass(Cell.class);
        job.setOutputValueClass(IntWritable.class);
        Path in = new Path(otherArgs[0]);
        Path out = new Path(otherArgs[1]);
        FileInputFormat.addInputPath(job, in);    // input path
        FileOutputFormat.setOutputPath(job, out); // output path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
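Two things stand out in the code, offered as a hedged sketch rather than a confirmed diagnosis: the reducer never calls `iterator.next()`, so the counting loop cannot make progress, and the map output types differ from the reduce output types but are never declared:

```java
// In BayReducer.reduce(): consume the values while counting.
for (Text ignored : values) {
    count++;
}

// In ClusterDriver.main(): declare the map output types explicitly,
// since they differ from the job's final output types.
job.setMapOutputKeyClass(Cell.class);
job.setMapOutputValueClass(Text.class);
```

For the grouping to work at all, `Cell` also needs a meaningful `compareTo` and Writable serialization as a `WritableComparable`.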

A few errors when running wordcount on a Hadoop cluster

I am a Hadoop beginner; please help, many thanks! I built a Hadoop cluster with docker inside a virtual machine, using ubuntu18.04 docker images. First, the following services are up on my hadoop1 master node:

```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```

And on the two worker nodes:

```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```

```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```

On hadoop1 (the master) I create a /data/input directory structure in HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```

Every `bin/hdfs dfs` invocation prints the warning above. Does this warning affect the Hadoop cluster in any way, and how can it be removed? The same warning appears when pushing the test1 file to HDFS:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
[same five WARNING lines as above]
```

when listing the uploaded file:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
[same five WARNING lines as above]
Found 1 items
-rw-r--r-- 1 root supergroup 60 2019-09-15 08:07 /data/input/test1
```

when running share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:

```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
[same five WARNING lines as above]
```

and when reading the wordcount result afterwards:

```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
[same five WARNING lines as above]
first 1
hello 2
is 2
my 2
test1 1
testwordcount 1
this 2
```

Could someone help me figure out how to deal with this? Many thanks!
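These WARNINGs are the JDK 9+ module system complaining about reflective access from Hadoop 2.9's auth library; the jobs themselves complete correctly, as the wordcount output above shows. A hedged workaround, since Hadoop 2.x predates the module system, is to run the cluster on a Java 8 JDK (the path is illustrative):

```sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```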

Submitting a MapReduce job from Eclipse on Win7 fails with exit code 143

```
2017-05-24 08:20:25,861 FATAL [IPC Server handler 4 on 34806] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1495633327030_0011_m_000002_2 - exited : java.io.IOException: Unable to initialize any output collector
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:439)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-05-24 08:20:26,651 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1495633327030_0011_m_000000_2: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
```
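Exit code 143 means the container was killed on request, and `Unable to initialize any output collector` inside `MapTask.createSortingCollector` often accompanies memory or classpath problems in the map task. One knob worth checking, as a hedged sketch with illustrative values for mapred-site.xml:

```xml
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <!-- the sort buffer must fit comfortably inside the map task heap -->
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
</property>
```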

hadoop 2.5.2 MapReduce job fails

```
16/06/14 03:26:45 INFO client.RMProxy: Connecting to ResourceManager at centos1/192.168.6.132:8032
16/06/14 03:26:47 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/14 03:26:47 INFO input.FileInputFormat: Total input paths to process : 1
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: number of splits:1
16/06/14 03:26:48 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1465885546873_0002
16/06/14 03:26:49 INFO impl.YarnClientImpl: Submitted application application_1465885546873_0002
16/06/14 03:26:49 INFO mapreduce.Job: The url to track the job: http://centos1:8088/proxy/application_1465885546873_0002/
16/06/14 03:26:49 INFO mapreduce.Job: Running job: job_1465885546873_0002
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 running in uber mode : false
16/06/14 03:27:10 INFO mapreduce.Job: map 0% reduce 0%
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 failed with state FAILED due to: Application application_1465885546873_0002 failed 2 times due to Error launching appattempt_1465885546873_0002_000002. Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:50334 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:118)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```

The error log then shows:

```
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1465885546873_0002_01_000001, NodeId: local:42709, NodeHttpAddress: local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:42709 }, ] for AM appattempt_1465885546873_0002_000001
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1465885546873_0002_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2016-06-14 03:26:50,948 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
[... identical retry lines for attempts 1 through 8 ...]
2016-06-14 03:26:59,960 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:59,962 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error launching appattempt_1465885546873_0002_000001.
Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:42709 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
```

core-site.xml:

```xml
<configuration>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>centos1:2181,centos2:2181,centos3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop2.5</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
```

hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>centos1,centos2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos1</name>
    <value>centos1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos2</name>
    <value>centos2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.centos1</name>
    <value>centos1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.centos2</name>
    <value>centos2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://centos2:8485;centos3:8485;centos4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop-data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```

yarn-site.xml:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>centos1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>centos1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>centos1:8033</value>
  </property>
</configuration>
```

mapred-site.xml:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

slaves:

```
centos2
centos3
centos4
```

hosts:

```
127.0.0.1 local local.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```
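A hedged reading of the logs: the AM launch fails on `Call From local.localdomain/127.0.0.1 to local:42709`, and the hosts file maps the name `local` to 127.0.0.1 on every machine, so the ResourceManager ends up dialing itself instead of the NodeManager that registered under that name. A sketch of the usual fix is to keep loopback entries generic and give each node only its real, unique hostname (illustrative):

```
127.0.0.1     localhost localhost.localdomain
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```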

Developing Hadoop MapReduce in Eclipse on Windows throws a NullPointerException

I built a Hadoop cluster on three Ubuntu servers and develop MapReduce in Eclipse on Windows. Eclipse can connect to Hadoop and browse the files on HDFS. But when I run my own MapReduce program with "run as hadoop", it throws a NullPointerException with some error mentioning LocalJob. What could be the cause? Packaging the project as a jar and running it from the command line in the Linux Hadoop environment works fine.

Importing MapReduce output into MySQL fails, while writing to local files works fine; could someone passing by take a look?

![error message](https://img-ask.csdn.net/upload/201909/29/1569747157_413167.png)

The Driver code is as follows:

```java
package com.sky.cmcc.offlineComputeMR;

import com.sky.cmcc.pojo.MFee;
import com.sky.cmcc.pojo.RFee;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * Classname CmccDriver
 * Date 2019/9/29 11:08
 * Created by Teddys
 * Description: loads the configuration, starts the MR job, writes the results to MySQL
 */
public class CmccDriver {

    // the four MySQL connection settings
    private static String DriverClass = "com.mysql.jdbc.Driver";
    private static String url = "jdbc:mysql://localhost:3306/bot?characterEncoding=UTF-8";
    private static String username = "root";
    private static String password = "123456";

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1 load the configuration and create the Job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // connect to MySQL
        // (note: called after Job.getInstance, which copies the conf)
        DBConfiguration.configureDB(conf, DriverClass, url, username, password);

        // 2 set the job's jar
        job.setJarByClass(CmccDriver.class);

        // 3 mapper and reducer classes
        job.setMapperClass(CmccMapper.class);
        job.setReducerClass(CmccReducer.class);

        // 4 map output and final output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(MFee.class);
        job.setOutputKeyClass(RFee.class);
        job.setOutputValueClass(NullWritable.class);

        // 5 input path; the output goes to MySQL
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        DBOutputFormat.setOutput(job, "cmcc0513",
                "day", "chargefee", "shouldfee", "orderCount", "chargePayTime", "chargeSuccessCount");

        // 6 submit the job and run
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
        System.out.println("b====" + b); // unreachable after System.exit
    }
}
```
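A hedged observation, not a confirmed fix: `Job.getInstance(conf)` takes a snapshot of the Configuration, so database settings applied to `conf` afterwards never reach the job, and DBOutputFormat then cannot find the connection settings at runtime. The sketch below simply reorders the two calls:

```java
Configuration conf = new Configuration();
DBConfiguration.configureDB(conf, DriverClass, url, username, password);
Job job = Job.getInstance(conf); // now carries the DB settings
```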

Class not found when running a MapReduce program from Eclipse

If I use setJarByClass, the class cannot be found; how should I solve this? ![screenshot](https://img-ask.csdn.net/upload/201912/04/1575464809_363370.png)
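`setJarByClass` locates the jar that *contains* the given class, which only works once such a jar exists; when launching from Eclipse against a cluster there is often no jar to find. A hedged workaround sketch (the path is hypothetical):

```java
// Point the job at an already-built jar instead of searching for one.
job.setJar("target/myjob.jar");
```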

MapReduce programming error: Job running in uber mode : false?

I wrote a MapReduce program in Eclipse to count the total number of lines across all files under several directories. The code is as follows:

```java
package ZhangBo;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class WordCount {

    static Log logger = LogFactory.getLog(WordCount.class);

    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // emits one (empty word, 1) pair per input line, so the reduce sums line counts
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            logger.debug("This is debug message.");
            logger.error("This is error message.");
            logger.info("ni hao ");
            context.write(word, one);
        }
    }

    // turns the map output pairs into summed pairs; the framework sorts,
    // merges and groups them in between
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    private static FileSystem hdfs;
    private static String args1;
    private static String args2;
    private static String args3;
    Configuration conf1 = new Configuration();

    private static void showDir(FileStatus fs) throws Exception {
        Path path = fs.getPath();
        if (fs.isDir()) {
            args2 = path.toString();
            if (args2 != args3)
                args1 = args1 + "," + args2;
            System.out.println("directory: " + path);
            FileStatus[] f = hdfs.listStatus(path);
            if (f.length > 0) {
                for (FileStatus file : f) {
                    showDir(file);
                    args3 = file.getPath().toString();
                }
            }
        } else {
            System.out.println("file: " + path);
        }
        System.out.println("directories so far: " + args1);
    }

    public void test() throws Exception {
        hdfs = FileSystem.get(conf1);
        FileStatus[] fs = hdfs.listStatus(new Path("hdfs://C15/zbo"));
        args1 = "hdfs://C15/zbo";
        if (fs.length > 0) {
            for (FileStatus f : fs) {
                showDir(f);
            }
        } else {
            System.out.println("nothing to traverse....");
        }
    }

    public static void main(String[] args) throws Exception {
        new WordCount().test();
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        FileInputFormat.addInputPaths(job, args1);
        FileOutputFormat.setOutputPath(job, new Path("outbo"));
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

I created directories and files under hdfs://C15/zbo, built the jar ho.jar in Eclipse, and ran the program with `hadoop jar ho.jar ZhangBo.WordCount`. The result is shown below:

```
[root@c15m1 /]# hadoop jar ho.jar ZhangBo.WordCount
file: hdfs://C15/zbo/file1.txt
directories so far: hdfs://C15/zbo
file: hdfs://C15/zbo/file2.txt
directories so far: hdfs://C15/zbo
directory: hdfs://C15/zbo/zbo
file: hdfs://C15/zbo/zbo/file1.txt
directories so far: hdfs://C15/zbo,hdfs://C15/zbo/zbo
file: hdfs://C15/zbo/zbo/file2.txt
directories so far: hdfs://C15/zbo,hdfs://C15/zbo/zbo
directory: hdfs://C15/zbo/zbo/zbo
file: hdfs://C15/zbo/zbo/zbo/file1.txt
directories so far: hdfs://C15/zbo,hdfs://C15/zbo/zbo,hdfs://C15/zbo/zbo/zbo
file: hdfs://C15/zbo/zbo/zbo/file2.txt
directories so far: hdfs://C15/zbo,hdfs://C15/zbo/zbo,hdfs://C15/zbo/zbo/zbo
directories so far: hdfs://C15/zbo,hdfs://C15/zbo/zbo,hdfs://C15/zbo/zbo/zbo
directories so far: hdfs://C15/zbo,hdfs://C15/zbo/zbo,hdfs://C15/zbo/zbo/zbo
15/11/17 08:55:37 INFO client.RMProxy: Connecting to ResourceManager at c15m1.ecld.com/132.121.94.213:8050
15/11/17 08:55:38 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 72820725 for hdfs on ha-hdfs:C15
15/11/17 08:55:38 INFO security.TokenCache: Got dt for hdfs://C15; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:C15, Ident: (HDFS_DELEGATION_TOKEN token 72820725 for hdfs)
15/11/17 08:55:38 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/11/17 08:55:38 INFO input.FileInputFormat: Total input paths to process : 8
15/11/17 08:55:38 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
15/11/17 08:55:38 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev dbd51f0fb61f5347228a7a23fe0765ac1242fcdf]
15/11/17 08:55:47 INFO mapreduce.JobSubmitter: number of splits:8
15/11/17 08:55:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1440641224751_187144
15/11/17 08:55:48 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:C15, Ident: (HDFS_DELEGATION_TOKEN token 72820725 for hdfs)
15/11/17 08:55:48 INFO impl.YarnClientImpl: Submitted application application_1440641224751_187144
15/11/17 08:55:49 INFO mapreduce.Job: The url to track the job: http://c15m1.ecld.com:8088/proxy/application_1440641224751_187144/
15/11/17 08:55:49 INFO mapreduce.Job: Running job: job_1440641224751_187144
15/11/17 08:55:51 INFO mapreduce.Job: Job job_1440641224751_187144 running in uber mode : false
15/11/17 08:55:51 INFO mapreduce.Job: map 0% reduce 0%
15/11/17 08:55:51 INFO mapreduce.Job: Job job_1440641224751_187144 failed with state FAILED due to: Application application_1440641224751_187144 failed 2 times due to AM Container for appattempt_1440641224751_187144_000002 exited with exitCode: -1000 due to: Application application_1440641224751_187144 initialization failed (exitCode=255) with output: Requested user hdfs is banned .Failing this attempt.. Failing the application.
15/11/17 08:55:51 INFO mapreduce.Job: Counters: 0
```

Question: what does `running in uber mode : false` mean, and is it the problem?
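`running in uber mode : false` is only an INFO line (the job is not being run as a small "uber" task inside the AM container); it is not an error. The actual failure is `Requested user hdfs is banned`, which comes from the LinuxContainerExecutor refusing to run containers as a system account. A hedged sketch of the relevant entries in container-executor.cfg (the values shown are common defaults, not taken from this cluster):

```
banned.users=hdfs,yarn,mapred,bin
min.user.id=500
```

Submitting the job as a regular, non-banned user is the usual route.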

MapReduce writing to a database fails

## DBUserWritable class

```java
package org.neworigin.com.Database;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBUserWritable implements DBWritable, WritableComparable {
    private String name = "";
    private String sex = "";
    private int age = 0;
    private int num = 0;
    private String department = "";
    private String tables = "";

    @Override
    public String toString() {
        return "DBUserWritable [name=" + name + ", sex=" + sex + ", age=" + age
                + ", department=" + department + "]";
    }

    public DBUserWritable(DBUserWritable d) {
        this.name = d.getName();
        this.sex = d.getSex();
        this.age = d.getAge();
        this.num = d.getNum();
        this.department = d.getDepartment();
        this.tables = d.getTables();
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSex() { return sex; }
    public void setSex(String sex) { this.sex = sex; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public int getNum() { return num; }
    public void setNum(int num) { this.num = num; }
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
    public String getTables() { return tables; }
    public void setTables(String tables) { this.tables = tables; }

    public DBUserWritable(String name, String sex, int age, int num, String department, String tables) {
        super();
        this.name = name;
        this.sex = sex;
        this.age = age;
        this.num = num;
        this.department = department;
        this.tables = tables;
    }

    public DBUserWritable() {
        super();
    }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(sex);
        out.writeInt(age);
        out.writeInt(num);
        out.writeUTF(department);
        out.writeUTF(tables);
    }

    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        sex = in.readUTF();
        age = in.readInt();
        num = in.readInt();
        department = in.readUTF();
        tables = in.readUTF();
    }

    public int compareTo(Object o) {
        return 0;
    }

    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, this.getName());
        statement.setString(2, this.getSex());
        statement.setInt(3, this.getAge());
        statement.setString(4, this.getDepartment());
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        this.name = resultSet.getString(1);
        this.sex = resultSet.getString(2);
        this.age = resultSet.getInt(3);
        this.department = resultSet.getString(4);
    }
}
```

## Mapper

```java
package org.neworigin.com.Database;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UserDBMapper extends Mapper<LongWritable, Text, Text, DBUserWritable> {
    DBUserWritable DBuser = new DBUserWritable();

    @Override
    protected void map(LongWritable key, Text value,
            Mapper<LongWritable, Text, Text, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String[] values = value.toString().split(" ");
        if (values.length == 4) {
            DBuser.setName(values[0]);
            DBuser.setSex(values[1]);
            DBuser.setAge(Integer.parseInt(values[2]));
            DBuser.setNum(Integer.parseInt(values[3]));
            DBuser.setTables("t1");
            System.out.println("mapper---t1---------------" + DBuser);
            context.write(new Text(values[3]), DBuser);
        }
        if (values.length == 2) {
            DBuser.setNum(Integer.parseInt(values[0]));
            DBuser.setDepartment(values[1]);
            DBuser.setTables("t2");
            context.write(new Text(values[0]), DBuser);
            // System.out.println("mapper --t2" + "--" + values[0] + "----" + DBuser);
        }
    }
}
```

## Reducer

```java
package org.neworigin.com.Database;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UserDBReducer extends Reducer<Text, DBUserWritable, NullWritable, DBUserWritable> {

    @Override
    protected void reduce(Text k2, Iterable<DBUserWritable> v2,
            Reducer<Text, DBUserWritable, NullWritable, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String Name = "";
        List<DBUserWritable> list = new LinkedList<DBUserWritable>();
        for (DBUserWritable val : v2) {
            list.add(new DBUserWritable(val)); // copy each object into the list
            // System.out.println("[table]" + val.getTables() + "----key" + k2 + "---" + val);
            if (val.getTables().equals("t2")) {
                Name = val.getDepartment();
            }
        }
        // the key is num
        for (DBUserWritable join : list) {
            System.out.println("[table]" + join.getTables() + "----key" + k2 + "---" + join);
            if (join.getTables().equals("t1")) {
                join.setDepartment(Name);
                System.out.println("db-----" + join);
                context.write(NullWritable.get(), join);
            }
        }
    }
}
```

## App

```java
package org.neworigin.com.Database;

import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class UserDBAPP {
    public static void main(String[] args) throws Exception, URISyntaxException {
        String INPUT_PATH = "file:///E:/BigData_eclipse_database/Database/data/table1";
        String INPUT_PATH1 = "file:///E:/BigData_eclipse_database/Database/data/table2";
        // String OUTPUT_PARH = "file:///E:/BigData_eclipse_database/Database/data/output";
        Configuration conf = new Configuration();
        // FileSystem fs = FileSystem.get(new URI(OUTPUT_PARH), conf);
        // if (fs.exists(new Path(OUTPUT_PARH))) {
        //     fs.delete(new Path(OUTPUT_PARH));
        // }
        Job job = new Job(conf, "mydb");
        // database settings
        // (note: configureDB is called after "new Job(conf)", which copies the conf)
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/hadoop", "root", "123456");
        FileInputFormat.addInputPaths(job, INPUT_PATH);
        FileInputFormat.addInputPaths(job, INPUT_PATH1);
        job.setMapperClass(UserDBMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DBUserWritable.class);
        job.setReducerClass(UserDBReducer.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(DBUserWritable.class);
        // FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PARH)); // output path
        DBOutputFormat.setOutput(job, "user_tables", "name", "sex", "age", "department");
        job.setOutputFormatClass(DBOutputFormat.class);
        boolean re = job.waitForCompletion(true);
        System.out.println(re);
    }
}
```

## Error

PS: this is a table join. Writing to local files works fine; writing to the database fails:

```
17/11/10 11:39:11 WARN output.FileOutputCommitter: Output Path is null in cleanupJob()
17/11/10 11:39:11 WARN mapred.LocalJobRunner: job_local1812680657_0001
java.lang.Exception: java.io.IOException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:541)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 running in uber mode : false
17/11/10 11:39:12 INFO mapreduce.Job: map 100% reduce 0%
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 failed with state FAILED due to: NA
17/11/10 11:39:12 INFO mapreduce.Job: Counters: 35
```
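The same Configuration-copy pitfall as in the MySQL import question above applies here, offered as a hedged sketch: `new Job(conf)` snapshots the configuration, so `DBConfiguration.configureDB(conf, ...)` called afterwards never reaches the job, and `DBOutputFormat.getRecordWriter` then fails with a bare IOException when it cannot build a connection:

```java
Configuration conf = new Configuration();
DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/hadoop", "root", "123456");
Job job = new Job(conf, "mydb"); // now carries the DB settings
```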

Counting words with MapReduce

Data format: line 1 is `ABCDE`, line 2 is `CDEF`. Use MapReduce to count how many distinct words appear (A, B, C, D, E, F: six words in total).
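Since this is a how-to question, here is a minimal sketch of one approach, assuming each character is a "word" (per the sample data) and a single reducer; the class names are hypothetical:

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class DistinctWordCount {
    public static class DistinctMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            // treat each character as one word, per the question's data format
            for (char c : value.toString().trim().toCharArray()) {
                ctx.write(new Text(String.valueOf(c)), NullWritable.get());
            }
        }
    }

    public static class DistinctReducer extends Reducer<Text, NullWritable, Text, IntWritable> {
        private int distinct = 0;

        @Override
        protected void reduce(Text key, Iterable<NullWritable> vals, Context ctx) {
            distinct++; // exactly one reduce() call per distinct word
        }

        @Override
        protected void cleanup(Context ctx) throws IOException, InterruptedException {
            // correct only with a single reducer, which groups all keys in one task
            ctx.write(new Text("distinct"), new IntWritable(distinct));
        }
    }
}
```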

Java MapReduce wordcount: counting occurrences

For one day of data, I need, per hour, how many visit records the page has and how many distinct users those records involve. Input format:

```
hourid url
0 com
0 com
0 cn
0 net
```

Desired output, something like:

```
hourid visitscount userscount
0 4 3
```

With the wordcount approach you just accumulate values per key directly, and I really don't know how to handle this; please advise. For a given hour, visitscount is simply the number of records; userscount means the number of distinct values (com, cn, net). If the output could be a CSV file, even better.
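A minimal sketch of one way to compute both counts, assuming (as the question implies) the url column identifies the user, and that each hour's distinct set fits in reducer memory; the class names (`HourlyStats`, `HourMapper`, `HourReducer`) are hypothetical:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class HourlyStats {
    // emits one (hourid, url) pair per record
    public static class HourMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            String[] f = value.toString().trim().split("\\s+");
            if (f.length == 2 && !f[0].equals("hourid")) { // skip the header line
                ctx.write(new Text(f[0]), new Text(f[1]));
            }
        }
    }

    // one reduce call per hour: count records, and distinct values via a set
    public static class HourReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text hour, Iterable<Text> urls, Context ctx)
                throws IOException, InterruptedException {
            int visits = 0;
            Set<String> users = new HashSet<String>();
            for (Text u : urls) {
                visits++;
                users.add(u.toString());
            }
            ctx.write(hour, new Text(visits + "," + users.size()));
        }
    }
}
```

TextOutputFormat separates key and value with a tab by default; setting `mapreduce.output.textoutputformat.separator` to `,` would make the part files plain CSV.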
