[Beginner] Hadoop MapReduce: the Map stage produces no output

I followed this tutorial: "Hadoop 2.6 distributed mode, a simple example: finding each year's highest temperature and sorting temperatures from high to low by year" by 原明卓 (CSDN blog): http://blog.csdn.net/lablenet/article/details/50608197#java

The run output from following that post is below.

16/10/19 05:27:51 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
16/10/19 05:27:52 INFO input.FileInputFormat: Total input paths to process : 1
16/10/19 05:27:52 INFO util.NativeCodeLoader: Loaded the native-hadoop library
16/10/19 05:27:52 WARN snappy.LoadSnappy: Snappy native library not loaded
16/10/19 05:27:54 INFO mapred.JobClient: Running job: job_201610190234_0013
16/10/19 05:27:55 INFO mapred.JobClient: map 0% reduce 0%
16/10/19 05:28:24 INFO mapred.JobClient: map 100% reduce 0%
16/10/19 05:28:41 INFO mapred.JobClient: map 100% reduce 20%
16/10/19 05:28:42 INFO mapred.JobClient: map 100% reduce 40%
16/10/19 05:28:50 INFO mapred.JobClient: map 100% reduce 46%
16/10/19 05:28:51 INFO mapred.JobClient: map 100% reduce 60%
16/10/19 05:29:01 INFO mapred.JobClient: map 100% reduce 100%
16/10/19 05:29:01 INFO mapred.JobClient: Job complete: job_201610190234_0013
16/10/19 05:29:01 INFO mapred.JobClient: Counters: 28
16/10/19 05:29:01 INFO mapred.JobClient: Job Counters
16/10/19 05:29:01 INFO mapred.JobClient: Launched reduce tasks=6
16/10/19 05:29:01 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=26528
16/10/19 05:29:01 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
16/10/19 05:29:01 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
16/10/19 05:29:01 INFO mapred.JobClient: Launched map tasks=1
16/10/19 05:29:01 INFO mapred.JobClient: Data-local map tasks=1
16/10/19 05:29:01 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=107381
16/10/19 05:29:01 INFO mapred.JobClient: File Output Format Counters
16/10/19 05:29:01 INFO mapred.JobClient: Bytes Written=0
16/10/19 05:29:01 INFO mapred.JobClient: FileSystemCounters
16/10/19 05:29:01 INFO mapred.JobClient: FILE_BYTES_READ=30
16/10/19 05:29:01 INFO mapred.JobClient: HDFS_BYTES_READ=1393
16/10/19 05:29:01 INFO mapred.JobClient: FILE_BYTES_WRITTEN=354256
16/10/19 05:29:01 INFO mapred.JobClient: File Input Format Counters
16/10/19 05:29:01 INFO mapred.JobClient: Bytes Read=1283
16/10/19 05:29:01 INFO mapred.JobClient: Map-Reduce Framework
16/10/19 05:29:01 INFO mapred.JobClient: Map output materialized bytes=30
16/10/19 05:29:01 INFO mapred.JobClient: Map input records=46
16/10/19 05:29:01 INFO mapred.JobClient: Reduce shuffle bytes=30
16/10/19 05:29:01 INFO mapred.JobClient: Spilled Records=0
16/10/19 05:29:01 INFO mapred.JobClient: Map output bytes=0
16/10/19 05:29:01 INFO mapred.JobClient: CPU time spent (ms)=16910
16/10/19 05:29:01 INFO mapred.JobClient: Total committed heap usage (bytes)=195301376
16/10/19 05:29:01 INFO mapred.JobClient: Combine input records=0
16/10/19 05:29:01 INFO mapred.JobClient: SPLIT_RAW_BYTES=110
16/10/19 05:29:01 INFO mapred.JobClient: Reduce input records=0
16/10/19 05:29:01 INFO mapred.JobClient: Reduce input groups=0
16/10/19 05:29:01 INFO mapred.JobClient: Combine output records=0
16/10/19 05:29:01 INFO mapred.JobClient: Physical memory (bytes) snapshot=331567104
16/10/19 05:29:01 INFO mapred.JobClient: Reduce output records=0
16/10/19 05:29:01 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2264113152
16/10/19 05:29:01 INFO mapred.JobClient: Map output records=0

This is the data source format:

yyyy-MM-dd HH:mm:ss\ttemperature
example: 1995-10-10 10:10:10	6.54

Given that format, in RunJob I changed the "°C" in

int year = c.get(1);
String hot = ss[1].substring(0, ss[1].lastIndexOf("°C"));
KeyPari keyPari = new KeyPari();
keyPari.setYear(year);

to "\n". Everything else is the same as the blog post; I only deleted the IF check inside the Map class and changed the input and output paths. Could anyone explain why the Map stage emits nothing? Much appreciated.
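One thing worth checking: a value delivered by TextInputFormat is a single line with the trailing newline already stripped, so `lastIndexOf("\n")` always returns -1, and `substring(0, -1)` throws StringIndexOutOfBoundsException, which would explain a mapper that reads 46 records but emits 0. The following plain-Java sketch (no Hadoop dependencies; the names ss, hot, year mirror the blog's code, and the sample line is the one from the question) shows the failure and a guarded fix. This is a plausible cause, not a confirmed diagnosis.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class ParseSketch {

    // Extract the temperature field from one input line.
    static String parseHot(String line) {
        String[] ss = line.split("\t");     // ss[0]=timestamp, ss[1]=temperature
        int cut = ss[1].lastIndexOf("\n");  // a TextInputFormat value never
                                            // contains '\n', so this is -1
        if (cut < 0) {
            // substring(0, -1) would throw StringIndexOutOfBoundsException
            // and kill the map() call; guard and take the whole field instead.
            return ss[1];
        }
        return ss[1].substring(0, cut);
    }

    // Extract the year, as the blog does with c.get(1) (Calendar.YEAR == 1).
    static int parseYear(String line) throws Exception {
        String[] ss = line.split("\t");
        Calendar c = Calendar.getInstance();
        c.setTime(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(ss[0]));
        return c.get(Calendar.YEAR);
    }

    public static void main(String[] args) throws Exception {
        String line = "1995-10-10 10:10:10\t6.54";
        System.out.println(parseYear(line) + " " + parseHot(line));
        // With the original, unguarded substring(0, lastIndexOf("\n")),
        // the second call would throw instead of returning "6.54".
    }
}
```

Since the trailing marker is absent entirely in this data, dropping the substring call and using ss[1] directly (after a length check on ss) is the simplest fix.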

2 answers


其他相关推荐
QuartzJob 调用 hadoop mapreduce 报错
Error: java.io.IOException: com.mysql.jdbc.Driver at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185) at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540) at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Hadoop mapreduce传值问题
最近mapreduce编写遇到了问题。在step4中,reduce可以同时收到从map中传来的A和B两组数据。但是在step5中的reudce却无法同时收到A、B两组数据,出现了有A没B,有B没A的现象,即A和B无法在同一次循环中出现。 step5,我几乎是从step4复制过来的,很奇怪他们的执行步骤为什么不一样。 step4 ``` import java.io.IOException; import java.util.HashMap; import java.util.Iterator; import java.util.Map; import java.util.regex.Pattern; import org.apache.commons.net.telnet.EchoOptionHandler; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.Mapper; import org.apache.hadoop.mapreduce.Reducer; import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; import org.apache.hadoop.mapreduce.lib.input.FileSplit; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogWriter; //同现矩阵和用户偏好矩阵相乘 public class Step4 { public static boolean run(Configuration con, Map<String, String>map) { try { FileSystem fs = FileSystem.get(con); Job job = Job.getInstance(); job.setJobName("step4"); job.setJarByClass(App.class); job.setMapperClass(Step4_Mapper.class); job.setReducerClass(Step4_Reducer.class); job.setMapOutputKeyClass(Text.class); job.setMapOutputValueClass(Text.class); FileInputFormat.setInputPaths(job, new Path[] { new Path(map.get("Step4Input1")), new Path(map.get("Step4Input2")) }); Path outpath = new Path(map.get("Step4Output")); if(fs.exists(outpath)){ fs.delete(outpath,true); } FileOutputFormat.setOutputPath(job, outpath); boolean f = job.waitForCompletion(true); return f; }catch(Exception e) { e.printStackTrace(); } return false; } static class Step4_Mapper extends Mapper<LongWritable, Text, Text, Text>{ private String flag; //每次map时都会先判断一次 @Override protected void setup(Context context )throws IOException,InterruptedException{ FileSplit split = (FileSplit) context.getInputSplit(); flag = 
split.getPath().getParent().getName(); System.out.print(flag+ "*************************"); } @Override protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException{ String[] tokens = Pattern.compile("[\t,]").split(value.toString()); //物品共现矩阵 if(flag.equals("step3")) { // i2:i3 1 // i2:i2 2 String[] v1 = tokens[0].split(":"); String itemID1 = v1[0]; String itemID2 = v1[1]; String num = tokens[1]; Text k = new Text(itemID1); Text v = new Text("A:"+itemID2+","+num); //A:i2,1 context.write(k,v); }else if(flag.equals("step2")) {//用户评价矩阵 // u2 i1:2,i3:4 String userID = tokens[0]; for(int i=1;i<tokens.length;i++) { String[] vector = tokens[i].split(":"); String itemID = vector[0]; //物品ID String pref = vector[1];//评分 Text k = new Text(itemID); Text v = new Text("B:"+userID+","+pref); context.write(k, v); } } } } static class Step4_Reducer extends Reducer<Text, Text, Text, Text>{ @Override protected void reduce(Text key, Iterable<Text>values, Context context) throws IOException,InterruptedException{ //A为同现矩阵,B为用户偏好矩阵 //某一个物品k,针对它和其他所有物品的同现次数v,都在mapA集合中 // Text k = new Text(itemID1); //Text v = new Text("A:"+itemID2+","+num); //A:i2,1 // context.write(k,v); //和该物品(key中的itemID)同现的其他物品的同现集合 //其他物品ID为map的key,同现数字为值 Map<String, Integer> mapA = new HashMap<String,Integer>(); //该物品(key中的itemID),所有用户的推荐权重分数 Map<String, Integer>mapB = new HashMap<String,Integer>(); for(Text line:values) { String val = line.toString(); if(val.startsWith("A:")) { String[] kv = Pattern.compile("[\t,]").split(val.substring(2)); try { mapA.put(kv[0], Integer.parseInt(kv[1])); }catch(Exception e) { e.printStackTrace(); } }else if(val.startsWith("B:")) { String[] kv = Pattern.compile("[\t,]").split(val.substring(2)); try { mapB.put(kv[0], Integer.parseInt(kv[1])); }catch(Exception e) { e.printStackTrace(); } } } double result = 0; Iterator<String>iter = mapA.keySet().iterator(); while(iter.hasNext()) { String mapk = iter.next(); //itemID int num 
=mapA.get(mapk).intValue(); // 获取同现值 Iterator<String>iterb = mapB.keySet().iterator(); while(iterb.hasNext()) { String mapkb = iterb.next(); int pref = mapB.get(mapkb).intValue(); result = num*pref; Text k = new Text(mapkb); Text v = new Text(mapk+ "," + result); context.write(k, v); } } } } } ``` step5 ``` import java.io.IOException; import java.util.HashMap; import java.util.Iterator; import java.util.Map; import java.util.regex.Pattern; import org.apache.commons.net.telnet.EchoOptionHandler; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.Mapper; import org.apache.hadoop.mapreduce.Reducer; import org.apache.hadoop.mapreduce.Mapper.Context; import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; import org.apache.hadoop.mapreduce.lib.input.FileSplit; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogWriter; //获得结果矩阵 public class Step5 { public static boolean run(Configuration con, Map<String, String>map) { try { FileSystem fs = FileSystem.get(con); Job job = Job.getInstance(); job.setJobName("step5"); job.setJarByClass(App.class); job.setMapperClass(Step5_Mapper.class); job.setReducerClass(Step5_Reducer.class); job.setMapOutputKeyClass(Text.class); job.setMapOutputValueClass(Text.class); FileInputFormat.setInputPaths(job, new Path[] { new Path(map.get("Step5Input1")), new Path(map.get("Step5Input2")) }); Path outpath = new Path(map.get("Step5Output")); if(fs.exists(outpath)){ fs.delete(outpath,true); } FileOutputFormat.setOutputPath(job, outpath); boolean f = job.waitForCompletion(true); return f; }catch(Exception e) { e.printStackTrace(); } return false; } static class Step5_Mapper extends Mapper<LongWritable, Text, Text, Text>{ private String flag; 
//每次map时都会先判断一次 @Override protected void setup(Context context )throws IOException,InterruptedException{ FileSplit split = (FileSplit) context.getInputSplit(); flag = split.getPath().getParent().getName(); System.out.print(flag+ "*************************"); } @Override protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException{ String[] tokens = Pattern.compile("[\t,]").split(value.toString()); if(flag.equals("step4")) { // i2:i3 1 // i2:i2 2 Text k = new Text(tokens[0]); Text v = new Text("A:"+tokens[1]+","+tokens[2]); context.write(k, v); }else if(flag.equals("step2")) {//用户评价矩阵 // u2 i1:2,i3:4 String userID = tokens[0]; for(int i=1;i<tokens.length;i++) { String[] vector = tokens[i].split(":"); String itemID = vector[0]; //物品ID String pref = vector[1];//评分 Text k = new Text(itemID); Text v = new Text("B:"+userID+","+pref); context.write(k, v); } } } } //本reduce 负责累加结果 static class Step5_Reducer extends Reducer<Text, Text, Text, Text>{ protected void reduce(Text key, Iterable<Text>values, Context context) throws IOException,InterruptedException{ //其他物品ID为map的key,同现数字为值 Map<String, Double> mapA = new HashMap<String,Double>(); //该物品(key中的itemID),所有用户的推荐权重分数 Map<String, Integer>mapB = new HashMap<String,Integer>(); for(Text line : values) { String val = line.toString(); if(val.startsWith("A:")) { String[] kv = Pattern.compile("[\t,]").split(val.substring(2)); String tokens = kv[1]; String itemID = kv[0];//物品id Double score = Double.parseDouble(tokens); //相乘结果 //相加计算 if(mapA.containsKey(itemID)) { mapA.put(itemID, mapA.get(itemID)+score); }else { mapA.put(itemID, score); } }else if(val.startsWith("B:")) { String[] kv = Pattern.compile("[\t,]").split(val.substring(2)); try { mapB.put(kv[0], Integer.parseInt(kv[1])); }catch(Exception e) { e.printStackTrace(); } } } Iterator<String> iter = mapA.keySet().iterator(); while(iter.hasNext()) { String itemID = iter.next(); double score = mapA.get(itemID); Text v = new 
Text(itemID+","+score); Iterator<String>iterb = mapB.keySet().iterator(); while(iterb.hasNext()) { String mapkb = iterb.next(); Text k = new Text(mapkb); if(k.equals(key)) { continue; }else { context.write(key, v); } } } } } } ``` step4和step5配置 ![图片说明](https://img-ask.csdn.net/upload/201804/25/1524617462_994374.png) step4,在for循环中同时出现A和B ![step4,在for循环中同时出现A和B](https://img-ask.csdn.net/upload/201804/25/1524616391_511813.png) step5中,A和B无法出现在同一次循环 ![有A没B,此时mapB是无法点击开的](https://img-ask.csdn.net/upload/201804/25/1524616746_557066.png) 直接跳出了for循环进入下面的while循环,此时没有mapB,while无法正常进行 ![跳出了for循环](https://img-ask.csdn.net/upload/201804/25/1524616866_908151.png) 进行了多次step5后,输出完所有mapA之后,在下一次step5才进入mapB,此时轮到mapA是空的,而只有mapB ![mapA是空的,只有mapB](https://img-ask.csdn.net/upload/201804/25/1524617121_817431.png)
在eclipse运行hadoop mapreduce例子报错
在终端运行hadoop带的例子正常,hadoop节点正常,错误如下 17/09/05 20:20:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 17/09/05 20:20:16 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id 17/09/05 20:20:16 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= Exception in thread "main" java.net.ConnectException: Call From master/192.168.1.110 to localhost:9000 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) at org.apache.hadoop.ipc.Client.call(Client.java:1479) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068) at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426) at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145) at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308) at mapreduce.Temperature.main(Temperature.java:202) Caused by: java.net.ConnectException: 拒绝连接 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712) at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) at 
org.apache.hadoop.ipc.Client.getConnection(Client.java:1528) at org.apache.hadoop.ipc.Client.call(Client.java:1451) ... 28 more
eclipse运行hadoop mapreduce程序如下错误
2017-09-06 15:48:42,677 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1460)) - Starting flush of map output 2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1482)) - Spilling map output 2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1483)) - bufstart = 0; bufend = 108; bufvoid = 104857600 2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1485)) - kvstart = 26214396(104857584); kvend = 26214352(104857408); length = 45/6553600 2017-09-06 15:48:42,733 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1667)) - Finished spill 0 2017-09-06 15:48:42,743 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1038)) - Task:attempt_local1469942249_0001_m_000000_0 is done. And is in the process of committing 2017-09-06 15:48:42,751 INFO [Thread-19] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete. 
2017-09-06 15:48:42,783 WARN [Thread-19] mapred.LocalJobRunner (LocalJobRunner.java:run(560)) - job_local1469942249_0001 java.lang.Exception: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.getRssMemorySize()J at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.getRssMemorySize()J at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:872) at org.apache.hadoop.mapred.Task.updateCounters(Task.java:1021) at org.apache.hadoop.mapred.Task.done(Task.java:1040) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345) at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 2017-09-06 15:48:43,333 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1360)) - Job job_local1469942249_0001 running in uber mode : false 2017-09-06 15:48:43,335 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 0% reduce 0% 2017-09-06 15:48:43,337 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Job job_local1469942249_0001 failed with state FAILED due to: NA 2017-09-06 15:48:43,352 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1385)) - Counters: 10 Map-Reduce Framework Map input records=12 Map output records=12 Map output bytes=108 Map output materialized bytes=0 Input split bytes=104 Combine input records=0 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 File Input Format Counters Bytes Read=132 Finished
windows eclipse 下开发hadoop mapreduce,报空指针异常。
用三台ubuntu系统的服务器,搭建了hadoop集群,然后在windows下 用eclipse开发mapreduce,能连上hadoop,也能显示hdsf上的文件。自己写了mapreduce程序,run as hadoop 的时候,报空指针异常,什么localjob 之类的错误,什么原因求指点, 将工程打成jar包在linux hadoop环境用命令行运行是没问题的。。
hadoop mapreduce 数据分析 丢数据
最近发现hadoop的mapreduce程序会丢数据,不知道是什么原因,请教各位: hadoop环境,通过mapreduce程序分析hdfs上的数据,一天的数据是按小时存储的,每一个小时一个文件价,数据格式都是一样的,现在如果在16点这个文件价里有一条数据a,如果我用mr分析一整天的数据,数据a则丢失,如果单独跑16点这个文件夹里的数据,则数据a不会丢失,可以正常被分析出来,只要一加上其他时间段的数据,数据a就分析不出来,请问这是为什么? 最近在学习spark,我用spark程序跑同样的数据,整天的,不会有丢失的问题,的所以我肯定不是数据格式的问题 希望大家能帮我解决这个hadoop的问题,谢谢啦
hadoop mapreduce报错
java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) Caused by: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) at org.apache.hadoop.ipc.Client.call(Client.java:1475) at org.apache.hadoop.ipc.Client.call(Client.java:1412) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy31.addBlock(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy32.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448) Job Submission failed with exception 'java.lang.RuntimeException(Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) )' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/root/935624e0-aea4-47d6-842c-32d42d506d4b/hive_2017-02-16_04-42-39_689_6740522155632742535-1/-mr-10004/7b69d4eb-6fe2-4c55-a6cd-ba4dcd5c2054/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
hadoop MapReduce 路径输入
MapReduce程序中要处理的文件在一个文件夹及它的子文件夹中,用什么方法可以处理这种情况,让所有的文件都能被处理
hadoop mapreduce 能帮我解决什么问题?
# 求救。。。 在mapreduce下,能够进行一次map后,对map的结果进行多次reduce呢?
eclipse 开发 hadoop mapreduce的方式
我是在windows下用eclipse下开发mapreduce 然后打成jar包,在linux下,用命令运行jar,这种方式好吗,有其他方式吗?
hadoop mapreduce 在编写好的程序下 运行程序出现错误,求错误所在
15/09/01 10:05:06 INFO mapred.JobClient: map 0% reduce 0% 15/09/01 10:05:22 INFO mapred.JobClient: Task Id : attempt_201509011003_0001_m_000002_0, Status : FAILED java.util.NoSuchElementException at java.util.StringTokenizer.nextToken(StringTokenizer.java:332) at com.hebut.mr.Score$Map.map(Score.java:37) at com.hebut.mr.Score$Map.map(Score.java:1) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370) at org.apache.hadoop.mapred.Child$4.run(Child.java:255) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149) at org.apache.hadoop.mapred.Child.main(Child.java:249) 15/09/01 10:05:25 WARN mapred.JobClient: Error reading task outputSlave1.hadoop 15/09/01 10:05:25 WARN mapred.JobClient: Error reading task outputSlave1.hadoop 15/09/01 10:05:26 INFO mapred.JobClient: Task Id : attempt_201509011003_0001_m_000000_0, Status : FAILED java.util.NoSuchElementException at java.util.StringTokenizer.nextToken(StringTokenizer.java:332) at com.hebut.mr.Score$Map.map(Score.java:37) at com.hebut.mr.Score$Map.map(Score.java:1) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370) at org.apache.hadoop.mapred.Child$4.run(Child.java:255) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149) at org.apache.hadoop.mapred.Child.main(Child.java:249)
cdh hadoop mapreduce 运行时的问题:(有时候会出现,有时候不出现,急求大神帮助)
15/10/08 08:49:13 INFO mapreduce.Job: Job job_1419225162729_18465 running in uber mode : false 15/10/08 08:49:13 INFO mapreduce.Job: map 0% reduce 0% 15/10/08 08:49:13 INFO mapreduce.Job: Job job_1419225162729_18465 failed with state FAILED due to: Application application_1419225162729_18465 failed 1 times due to AM Container for appattempt_1419225162729_18465_000001 exited with exitCode: -1000 due to: java.io.IOException: Not able to initialize app-log directories in any of the configured local directories for app application_1419225162729_18465 at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.createAppLogDirs(DefaultContainerExecutor.java:459) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:91) at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:861) .Failing this attempt.. Failing the application. 15/10/08 08:49:13 INFO mapreduce.Job: Counters: 0 Moved: 'hdfs://oiddhnode02:8020/user/nmger/worktemp/2015100408' to trash at: hdfs://oiddhnode02:8020/user/nmger/.Trash/Current
Hadoop MapReduce: counting how many key-value pairs have an empty value
I've hit a problem and would really appreciate any help. I'm parsing signaling records; one record looks like this:

[2016-04-02 09:58:09,724] len:78;type:1002;msc:0E1F;bsc:3F17;time:2016-04-01 16:48:46.494;lac:13883;ci:8713;imsi:460004544938252;msisdn:13994482976;callType:0;disLen:11;disMsisdn:13503531697;remark:0;

map0:
input: one signaling record (the line above), which is parsed into an object
output: key: imsi, value: {imsi:460004544938252, msisdn:13994482976}

reduce0:
input: key: imsi, value: List<> of {imsi:460004544938252, msisdn:''}, {imsi:460004544938252, msisdn:13994482976}, {imsi:460004544938252, msisdn:''}

Here I want to count:
1. the total number of records whose msisdn is empty;
2. the number of imsi whose msisdn is unresolved (count 1 when every msisdn in the imsi's list is empty).

The problem: I need to write the statistics as a single line (date | total calls | total calls with unresolved IMSI | ratio 1 | total IMSI | unresolved IMSI count | ratio 2) to a file. How can I tell that I am reading the last record? And if that is not possible, how do I pass this reduce's output to a second MapReduce job to do the aggregation? Any pointers are appreciated, thanks!
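A reducer cannot know which record is the "last" one, but it does not need to: the usual pattern is to accumulate the counters in Reducer instance fields and emit the single summary line from the Reducer's `cleanup(Context)` method, which the framework calls exactly once after the final `reduce()` call (the alternative, also workable, is a second aggregation job with a single reducer). The counting logic itself can be sketched in plain Java; the class and field names below are illustrative, not from the original code:

```java
import java.util.List;

public class MsisdnStats {
    long totalCalls = 0;      // all records seen
    long emptyMsisdn = 0;     // records whose msisdn is empty
    long totalImsi = 0;       // distinct imsi keys (one reduce() call per key)
    long unresolvedImsi = 0;  // imsi whose msisdn is empty in every record

    // Call once per reduce() invocation, i.e. once per imsi key.
    void addImsiGroup(List<String> msisdns) {
        totalImsi++;
        boolean allEmpty = true;
        for (String m : msisdns) {
            totalCalls++;
            if (m == null || m.isEmpty()) emptyMsisdn++;
            else allEmpty = false;
        }
        if (allEmpty) unresolvedImsi++;
    }

    // What cleanup(Context) would write as the one summary line.
    String summary() {
        return totalCalls + "|" + emptyMsisdn + "|" + totalImsi + "|" + unresolvedImsi;
    }

    public static void main(String[] args) {
        MsisdnStats s = new MsisdnStats();
        s.addImsiGroup(java.util.Arrays.asList("", "13994482976", ""));
        s.addImsiGroup(java.util.Arrays.asList("", ""));
        System.out.println(s.summary()); // prints 5|4|2|1
    }
}
```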
Minimum execution time of a Hadoop MapReduce job
As the title says, I want to use Hadoop for text retrieval, with one query mapped to one job, so retrieval obviously needs to be fast. But when I run a job from Eclipse it takes about 7 seconds even when the job does nothing at all, and it takes even longer via the hadoop jar command. Can this time be optimized, or does MapReduce job initialization simply take that long? One more odd observation: a job that actually scans the text collection for the query finishes in just over 6 seconds, faster than the do-nothing job.
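Several seconds of fixed per-job overhead (JVM startup, scheduling, task launch) is normal for classic MapReduce, which is why it is a poor fit for interactive per-query retrieval; the usual answers are to serve the index from a long-running process, or, for tiny jobs, to run with the local runner and skip cluster scheduling entirely. A hedged configuration sketch (property names are the standard MRv2/MRv1 ones; verify against your Hadoop version):

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: run a latency-sensitive job in-process instead of on the cluster.
Configuration conf = new Configuration();
conf.set("mapreduce.framework.name", "local"); // MRv2; classic MRv1 uses "mapred.job.tracker" = "local"
conf.set("fs.defaultFS", "file:///");          // optionally read input from the local filesystem too
```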
Running MapReduce throws: Method threw 'java.lang.IllegalStateException' exception. Cannot evaluate org.apache.hadoop.mapreduce.Job.toString()
After running the code below, the above exception appears once the job is created. The program still runs to the end, but the job is never actually submitted, and no run shows up in the job history. A newbie asking for help.

```java
package MapReducer;

import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * @Describe First MapReduce job: read a document and count words
 * @Author zhanglei
 * @Date 2019/11/18 22:53
 **/
public class WordCountApp extends Configured implements Tool {

    public int run(String[] strings) throws Exception {
        String input_path = "hdfs://192.168.91.130:8020/data/wc.txt";
        String output_path = "hdfs://192.168.91.130:8020/data/outputwc";
        Configuration configuration = getConf();
        final FileSystem fileSystem = FileSystem.get(new URI(input_path), configuration);
        if (fileSystem.exists(new Path(output_path))) {
            fileSystem.delete(new Path(output_path), true);
        }
        Job job = Job.getInstance(configuration, "WordCountApp");
        job.setJarByClass(WordCountApp.class);
        job.setMapperClass(WordCountMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setReducerClass(WordCountReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        Path inpath = new Path(input_path);
        FileInputFormat.addInputPath(job, inpath);
        job.setOutputFormatClass(TextOutputFormat.class);
        Path outpath = new Path(output_path);
        FileOutputFormat.setOutputPath(job, outpath);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class WordCountReducer extends Reducer<Object, Text, Text, IntWritable> {
        private final static IntWritable res = new IntWritable(1);

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            res.set(sum);
            context.write(key, res);
        }
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new WordCountApp(), args);
        System.exit(exitCode);
    }
}
```
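Two things stand out in the code above (offered as likely causes, not verified against the poster's environment). First, the IllegalStateException itself is usually harmless: `Job.toString()` internally checks that the job is in the RUNNING state, so a debugger that evaluates `toString()` on a not-yet-submitted job triggers exactly this exception. Second, and more relevant to the missing results: the Reducer is declared with mismatched generics (`Reducer<Object, Text, ...>`), so its `reduce(Text, Iterable<IntWritable>, Context)` method merely overloads, rather than overrides, the framework hook and is never invoked. A sketch of the corrected declaration:

```java
// Sketch of the corrected Reducer (generics now match the map output types);
// with the generics fixed, @Override compiles, proving the hook is wired up.
public static class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable res = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        res.set(sum);
        context.write(key, res);
    }
}

// And in run(), it does no harm to declare the intermediate types explicitly:
// job.setMapOutputKeyClass(Text.class);
// job.setMapOutputValueClass(IntWritable.class);
```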
Running a MapReduce program from Eclipse against a Hadoop cluster in a VM fails with the errors below. How can this be fixed?
# Note: in Eclipse, all of Hadoop's advanced parameters have been set according to the configuration files, yet execution still fails as follows. How can this be fixed?
# Execution log:

```
2018-09-22 22:59:11,429 INFO [org.apache.commons.beanutils.FluentPropertyBeanIntrospector] - Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
2018-09-22 22:59:11,443 WARN [org.apache.hadoop.metrics2.impl.MetricsConfig] - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2018-09-22 22:59:11,496 INFO [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - Scheduled Metric snapshot period at 10 second(s).
2018-09-22 22:59:11,496 INFO [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - JobTracker metrics system started
2018-09-22 22:59:20,863 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-09-22 22:59:20,879 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2018-09-22 22:59:20,928 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input files to process : 1
2018-09-22 22:59:20,984 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2018-09-22 22:59:21,072 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1513265977_0001
2018-09-22 22:59:21,074 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Executing with tokens: []
2018-09-22 22:59:21,950 INFO [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Creating symlink: \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv <- G:\java_workspace\MapReduce_DEMO/movies.csv
2018-09-22 22:59:21,995 WARN [org.apache.hadoop.fs.FileUtil] - Command 'E:\hadoop-3.0.0\bin\winutils.exe symlink G:\java_workspace\MapReduce_DEMO\movies.csv \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv' failed 1 with: CreateSymbolicLink error (1314): ???????????
2018-09-22 22:59:21,995 WARN [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Failed to create symlink: \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv <- G:\java_workspace\MapReduce_DEMO/movies.csv
2018-09-22 22:59:21,996 INFO [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Localized hdfs://192.168.5.110:9000/temp/input/movies.csv as file:/tmp/hadoop-启政先生/mapred/local/1537628361150/movies.csv
2018-09-22 22:59:22,046 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2018-09-22 22:59:22,047 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1513265977_0001
2018-09-22 22:59:22,047 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2018-09-22 22:59:22,051 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - File Output Committer Algorithm version is 2
2018-09-22 22:59:22,051 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-09-22 22:59:22,052 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-09-22 22:59:22,100 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2018-09-22 22:59:22,101 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1513265977_0001_m_000000_0
2018-09-22 22:59:22,120 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - File Output Committer Algorithm version is 2
2018-09-22 22:59:22,120 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-09-22 22:59:22,128 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2018-09-22 22:59:22,169 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@7ef907ef
2018-09-22 22:59:22,172 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: hdfs://192.168.5.110:9000/temp/input/ratings.csv:0+2438233
----------cachePath=/temp/input/movies.csv----------
2018-09-22 22:59:22,226 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2018-09-22 22:59:22,233 WARN [org.apache.hadoop.mapred.LocalJobRunner] - job_local1513265977_0001
java.lang.Exception: java.io.FileNotFoundException: \temp\input\movies.csv (The system cannot find the path specified.)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.io.FileNotFoundException: \temp\input\movies.csv (The system cannot find the path specified.)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(Unknown Source)
	at java.io.FileInputStream.<init>(Unknown Source)
	at java.io.FileInputStream.<init>(Unknown Source)
	at java.io.FileReader.<init>(Unknown Source)
	at MovieJoinExercise1.MovieJoin$MovieJoinMapper.setup(MovieJoin.java:79)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
2018-09-22 22:59:23,051 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1513265977_0001 running in uber mode : false
2018-09-22 22:59:23,052 INFO [org.apache.hadoop.mapreduce.Job] - map 0% reduce 0%
2018-09-22 22:59:23,053 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1513265977_0001 failed with state FAILED due to: NA
2018-09-22 22:59:23,058 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 0
```
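A reading of the log above (a diagnosis, not verified on the poster's machine): `CreateSymbolicLink error (1314)` is Windows' ERROR_PRIVILEGE_NOT_HELD. The local job runner tried to symlink the distributed-cache file but the JVM lacked the SeCreateSymbolicLinkPrivilege, so the localized `movies.csv` never appeared on disk, and the mapper's `setup()` then failed with the FileNotFoundException. Two common workarounds:

```shell
# Option 1: run the IDE elevated, so the JVM holds SeCreateSymbolicLinkPrivilege.
# (Right-click eclipse.exe -> "Run as administrator", then re-run the job.)

# Option 2 (Windows 10+): enable Developer Mode, which grants symlink creation
# to unelevated processes:
# Settings -> Update & Security -> For developers -> Developer Mode
```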
My own Hadoop MapReduce program does not run in parallel
** I've been learning Hadoop for a while. Even though my programs follow the official examples and are written from the same template, they never truly run in parallel: the worker machines receive no tasks.

** Environment: CentOS 6.3 32-bit, JDK 1.7, hadoop-1.0.3, 1 master and 3 workers.

To make the problem concrete, here is the program:

```java
package campus;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class TestSmsMR {

    // map
    public static class TSmsMap extends Mapper<Object, Text, Text, Text> {
        private static Text keyWord = new Text();   // first node
        private static Text valueWord = new Text(); // second node

        public void map(Object key, Text value, Context context) {
            // value: tag: 0 1 u
            String line = value.toString();
            String[] arr = line.split(" |\u0009|\\|"); // split on space, tab, or |
            if (!(arr[0].equals(arr[1]))) {
                try {
                    String tmpKey = arr[0];
                    String tmpValue = "";
                    for (int i = 1; i < arr.length; i++) {
                        tmpValue += arr[i] + " ";
                    }
                    keyWord.set(tmpKey);
                    valueWord.set(tmpValue);
                    // the data is asymmetric, so a single write is enough
                    context.write(keyWord, valueWord);
                    // context.write(valueWord, keyWord); // only add this after checking the graph file for duplicates
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // reduce: write <Text, Set<Text>>
    public static class TSmsReduce extends Reducer<Text, Text, Text, Text> {
        private static Text keyStr = new Text();
        private static Text valueStr = new Text();

        public void reduce(Text key, Iterable<Text> values, Context context) {
            String writeKey = key.toString();
            String writeValues = "";
            for (Text val : values) {
                writeValues += val.toString() + "\t";
            }
            keyStr.set(writeKey);
            valueStr.set(writeValues);
            // System.out.println("writeKey: " + writeKey + "\twriteValues: " + writeValues);
            try {
                context.write(keyStr, valueStr);
            } catch (IOException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void preExectue(String inputPath, String outputPath) throws Exception {
        Configuration conf = new Configuration();
        // conf.setBoolean("mapred.compress.map.output", true);
        conf.setBoolean("mapred.output.compress", true);
        // conf.setIfUnset("mapred.map.output.compression.type", "BLOCK");
        conf.setClass("mapred.map.output.compression.codec", GzipCodec.class, CompressionCodec.class);
        conf.addResource(new Path("/usr/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/usr/hadoop/conf/hdfs-site.xml"));

        // delete outputPath first if it already exists
        Path outPutPath = new Path(outputPath);
        FileSystem fs = FileSystem.get(URI.create(outputPath), conf);
        fs.delete(outPutPath);

        // paths supplied by the caller
        String[] ars = new String[] { inputPath, outputPath };
        String[] otherArgs = new GenericOptionsParser(conf, ars).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: sotTest <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "TestSmsMR");
        job.setJarByClass(TestSmsMR.class);
        job.setMapperClass(TSmsMap.class);
        job.setReducerClass(TSmsReduce.class);
        // job.setNumReduceTasks(4);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        if (job.waitForCompletion(true)) {
            System.out.println("The preprocess mapreduce has finished!");
        }
    }

    // main works fine on its own -- so why no parallelism?
    public static void main(String[] args) throws Exception {
        Long startTime = System.currentTimeMillis();
        String srcPath = "campusSms";
        String dstPath = "campusSmsLabelOut";
        preExectue(srcPath, dstPath);
        Long runTime = System.currentTimeMillis() - startTime;
        System.out.println("run time: " + runTime);
    }
}
```

I still suspect the problem is in this method: public static void preExectue(String inputPath, String outputPath). Preconditions: the environment is set up, the master and workers communicate normally, and all daemons are running. Please focus on the programming side when answering. When the program runs, it only runs on the master; the MapReduce tasks never reach the workers. The input data is 250 MB, so with Hadoop's default 64 MB blocks it should split into 4 blocks and the 3 workers should receive tasks, yet the program simply does not run in parallel. Running the bundled Pi estimation job on the cluster does show parallelism; only my own program doesn't. Guidance from experienced Hadoop practitioners would be appreciated.
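One guess based purely on the code above (an assumption to verify, not a confirmed diagnosis): preExectue() loads core-site.xml and hdfs-site.xml but never mapred-site.xml, so `mapred.job.tracker` keeps its default value "local" and the whole job runs in-process in the submitting JVM via the LocalJobRunner, which looks exactly like "only the master does any work". A quick check and candidate fix:

```java
// Candidate fix: also load mapred-site.xml so the JobTracker address is picked up.
conf.addResource(new Path("/usr/hadoop/conf/mapred-site.xml"));
// Quick check: this should print the JobTracker's host:port, not "local".
System.out.println(conf.get("mapred.job.tracker"));
```

Packaging the program as a jar and submitting it with `hadoop jar` from a node whose classpath already contains the cluster configuration avoids the problem entirely, since all three *-site.xml files are then picked up automatically.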
How to get a map task's execution time in Hadoop
Which method in Hadoop can I use to get the execution time of a map task?
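One way, with the classic (mapred) API, is to fetch per-task reports after the job finishes and diff the timestamps, since TaskReport exposes getStartTime() and getFinishTime(). A sketch against the Hadoop 1.x API (verify the exact signatures on your version; `conf` and `jobIdString` are placeholders for your job's configuration and ID):

```java
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TaskReport;

// After job completion: query per-map-task reports and compute wall-clock time.
JobClient client = new JobClient(new JobConf(conf));
TaskReport[] maps = client.getMapTaskReports(JobID.forName(jobIdString));
for (TaskReport r : maps) {
    long millis = r.getFinishTime() - r.getStartTime();
    System.out.println(r.getTaskID() + " ran for " + millis + " ms");
}
```

The same numbers are also visible per attempt in the JobTracker web UI without any code.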
Warnings when running wordcount on a Hadoop cluster
I'm new to Hadoop and would appreciate any help, thanks a lot!

I built a Hadoop cluster with Docker inside a VM; the Docker image is based on ubuntu18.04.

My hadoop1 master node runs the following services:
```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```
And the two worker nodes:
```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```
```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```
On hadoop1 (the master) I create a /data/input directory on HDFS:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
Every single bin/hdfs dfs invocation prints this same block of warnings. Does it affect the cluster at all, and how can I make it go away?

Pushing the test1 file to HDFS triggers the same warnings:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
(same WARNING block as above)
```
So does listing the uploaded file:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
(same WARNING block as above)
Found 1 items
-rw-r--r--   1 root supergroup         60 2019-09-15 08:07 /data/input/test1
```
And running the bundled example jar:
```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
(same WARNING block as above)
```
And finally checking the wordcount result:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
(same WARNING block as above)
first   1
hello   2
is      2
my      2
test1   1
testwordcount   1
this    2
```
Could someone please take a look at how to resolve this? Many thanks!
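These warnings come from the JDK 9+ module system, not from HDFS itself, and the commands above clearly still succeed (the -ls listing and the wordcount output are correct), so the cluster is not affected. Hadoop 2.9.x is only validated on Java 8, so the cleanest fix is to run it on a Java 8 JVM; alternatively, the specific reflective access can be opened explicitly. Both options sketched below (paths and the module/package in the flag are assumptions; adjust them to your image and to the package named in your own warning):

```shell
# Option 1: point Hadoop at a Java 8 JVM (path is an example for Ubuntu images).
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

# Option 2: on a newer JDK, legalize the one reflective access the warning names,
# which silences it (sun.security.krb5 lives in the java.security.jgss module).
export HADOOP_OPTS="$HADOOP_OPTS --add-opens java.security.jgss/sun.security.krb5=ALL-UNNAMED"
```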