Which jar contains the class org.apache.hadoop.mapred.LocalJobRunner?

I'm using the Sqoop 1 Java API, but every command I run immediately fails with the error below. The Hadoop cluster is not on the machine that runs the program. Am I missing this class? I went through my dependencies and it really isn't in any of them.

```
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.LocalJobRunner.<init>(Lorg/apache/hadoop/conf/Configuration;)V
    at org.apache.hadoop.mapred.LocalClientProtocolProvider.create(LocalClientProtocolProvider.java:42)
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:95)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1260)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1256)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1284)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:322)
    at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:299)
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:440)
    at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at com.mshuoke.datagw.impl.sqoop.SqoopTest.main(SqoopTest.java:52)
09:55:47.069 [Thread-4] DEBUG org.apache.hadoop.util.ShutdownHookManager - ShutdownHookManger complete shutdown.
```

1 Answer

Just a personal suggestion: try sorting out the dependencies in Maven.

u011856283
@你好杰米 The project had duplicate jars: two org.apache.hadoop artifacts both contained this package, and the conflict caused the error. Removing one of them fixed it.
Replied almost 2 years ago
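
For anyone who lands here with the same error: on Hadoop 2.x this class ships in the hadoop-mapreduce-client-common artifact (on 1.x it lived in hadoop-core), and a NoSuchMethodError on its constructor usually means two different Hadoop versions ended up on the classpath, which is exactly what the comment above concluded. A minimal diagnostic sketch (my own, not from this thread) that lists every jar providing the class — more than one URL printed confirms the duplicate:

```java
import java.net.URL;
import java.util.Enumeration;

public class FindLocalJobRunner {
    public static void main(String[] args) throws Exception {
        // Ask the classloader for EVERY copy of the class on the classpath.
        String resource = "org/apache/hadoop/mapred/LocalJobRunner.class";
        Enumeration<URL> copies =
                Thread.currentThread().getContextClassLoader().getResources(resource);
        while (copies.hasMoreElements()) {
            System.out.println(copies.nextElement()); // one jar URL per copy found
        }
    }
}
```

Once the offending jar is identified, `mvn dependency:tree` shows which dependency drags it in, and an `<exclusion>` on that dependency removes it.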
Other related questions
Running a Hadoop MapReduce program from Eclipse fails with the following error

```
2017-09-06 15:48:42,677 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1460)) - Starting flush of map output
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1482)) - Spilling map output
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1483)) - bufstart = 0; bufend = 108; bufvoid = 104857600
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1485)) - kvstart = 26214396(104857584); kvend = 26214352(104857408); length = 45/6553600
2017-09-06 15:48:42,733 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1667)) - Finished spill 0
2017-09-06 15:48:42,743 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1038)) - Task:attempt_local1469942249_0001_m_000000_0 is done. And is in the process of committing
2017-09-06 15:48:42,751 INFO [Thread-19] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2017-09-06 15:48:42,783 WARN [Thread-19] mapred.LocalJobRunner (LocalJobRunner.java:run(560)) - job_local1469942249_0001
java.lang.Exception: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.getRssMemorySize()J
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.getRssMemorySize()J
    at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:872)
    at org.apache.hadoop.mapred.Task.updateCounters(Task.java:1021)
    at org.apache.hadoop.mapred.Task.done(Task.java:1040)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2017-09-06 15:48:43,333 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1360)) - Job job_local1469942249_0001 running in uber mode : false
2017-09-06 15:48:43,335 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 0% reduce 0%
2017-09-06 15:48:43,337 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Job job_local1469942249_0001 failed with state FAILED due to: NA
2017-09-06 15:48:43,352 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1385)) - Counters: 10
    Map-Reduce Framework
        Map input records=12
        Map output records=12
        Map output bytes=108
        Map output materialized bytes=0
        Input split bytes=104
        Combine input records=0
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
    File Input Format Counters
        Bytes Read=132
Finished
```

Hadoop throws the following error and it's driving me crazy

```
Exception in thread "main" java.io.IOException: Cannot run program "chmod": CreateProcess error=2, ?????????
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:354)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:337)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:481)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:473)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:280)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:372)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:208)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:92)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
    at cn.xyp.hadoop.test1.run(test1.java:63)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at cn.xyp.hadoop.test1.main(test1.java:21)
Caused by: java.io.IOException: CreateProcess error=2, ?????????
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:81)
    at java.lang.ProcessImpl.start(ProcessImpl.java:30)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
    ... 24 more
```

My very first Hadoop program already hits a problem — could someone please take a look?

If I package the program as a jar and run it from the command line, it works. But inside IDEA I get this error:

```
17/03/11 15:21:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    org/apache/hadoop/mapred/JobTrackerInstrumentation.create(Lorg/apache/hadoop/mapred/JobTracker;Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/JobTrackerInstrumentation; @5: invokestatic
  Reason:
    Type 'org/apache/hadoop/metrics2/lib/DefaultMetricsSystem' (current frame, stack[2]) is not assignable to 'org/apache/hadoop/metrics2/MetricsSystem'
  Current Frame:
    bci: @5
    flags: { }
    locals: { 'org/apache/hadoop/mapred/JobTracker', 'org/apache/hadoop/mapred/JobConf' }
    stack: { 'org/apache/hadoop/mapred/JobTracker', 'org/apache/hadoop/mapred/JobConf', 'org/apache/hadoop/metrics2/lib/DefaultMetricsSystem' }
  Bytecode:
    0x0000000: 2a2b b200 03b8 0004 b0
    at org.apache.hadoop.mapred.LocalJobRunner.<init>(LocalJobRunner.java:573)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:494)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at com.hadoop.maxtemperature.MaxTemperature.main(MaxTemperature.java:31)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
```

The pom dependencies:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>
</dependencies>
```

Using Eclipse to run a MapReduce program against a Hadoop cluster in a VM, but it reports the error below — how do I fix it?

# Note: all of Hadoop's advanced parameter settings in Eclipse have been configured according to the config files, yet execution still fails with the errors below. How can this be resolved?
# Execution log:

```
2018-09-22 22:59:11,429 INFO [org.apache.commons.beanutils.FluentPropertyBeanIntrospector] - Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
2018-09-22 22:59:11,443 WARN [org.apache.hadoop.metrics2.impl.MetricsConfig] - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2018-09-22 22:59:11,496 INFO [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - Scheduled Metric snapshot period at 10 second(s).
2018-09-22 22:59:11,496 INFO [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - JobTracker metrics system started
2018-09-22 22:59:20,863 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-09-22 22:59:20,879 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2018-09-22 22:59:20,928 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input files to process : 1
2018-09-22 22:59:20,984 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2018-09-22 22:59:21,072 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1513265977_0001
2018-09-22 22:59:21,074 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Executing with tokens: []
2018-09-22 22:59:21,950 INFO [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Creating symlink: \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv <- G:\java_workspace\MapReduce_DEMO/movies.csv
2018-09-22 22:59:21,995 WARN [org.apache.hadoop.fs.FileUtil] - Command 'E:\hadoop-3.0.0\bin\winutils.exe symlink G:\java_workspace\MapReduce_DEMO\movies.csv \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv' failed 1 with: CreateSymbolicLink error (1314): ???????????
2018-09-22 22:59:21,995 WARN [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Failed to create symlink: \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv <- G:\java_workspace\MapReduce_DEMO/movies.csv
2018-09-22 22:59:21,996 INFO [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Localized hdfs://192.168.5.110:9000/temp/input/movies.csv as file:/tmp/hadoop-启政先生/mapred/local/1537628361150/movies.csv
2018-09-22 22:59:22,046 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2018-09-22 22:59:22,047 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1513265977_0001
2018-09-22 22:59:22,047 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2018-09-22 22:59:22,051 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - File Output Committer Algorithm version is 2
2018-09-22 22:59:22,051 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-09-22 22:59:22,052 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-09-22 22:59:22,100 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2018-09-22 22:59:22,101 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1513265977_0001_m_000000_0
2018-09-22 22:59:22,120 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - File Output Committer Algorithm version is 2
2018-09-22 22:59:22,120 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-09-22 22:59:22,128 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2018-09-22 22:59:22,169 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@7ef907ef
2018-09-22 22:59:22,172 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: hdfs://192.168.5.110:9000/temp/input/ratings.csv:0+2438233
----------cachePath=/temp/input/movies.csv----------
2018-09-22 22:59:22,226 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2018-09-22 22:59:22,233 WARN [org.apache.hadoop.mapred.LocalJobRunner] - job_local1513265977_0001
java.lang.Exception: java.io.FileNotFoundException: \temp\input\movies.csv (系统找不到指定的路径。)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.io.FileNotFoundException: \temp\input\movies.csv (系统找不到指定的路径。)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileReader.<init>(Unknown Source)
    at MovieJoinExercise1.MovieJoin$MovieJoinMapper.setup(MovieJoin.java:79)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
2018-09-22 22:59:23,051 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1513265977_0001 running in uber mode : false
2018-09-22 22:59:23,052 INFO [org.apache.hadoop.mapreduce.Job] - map 0% reduce 0%
2018-09-22 22:59:23,053 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1513265977_0001 failed with state FAILED due to: NA
2018-09-22 22:59:23,058 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 0
```

Hadoop serialization problem: implementing WritableComparable, readFields throws EOFException

```java
public class MyKey implements WritableComparable<MyKey> {

    // flag == 1 : user
    // flag == 0 : shopping
    private Integer flag;
    private Integer u_id;
    private Integer s_id;
    private Integer s_u_id;
    private String u_info;
    private String s_info;

    @Override
    public int compareTo(MyKey o) {
        if (flag.equals(1)) { // user
            return u_id - o.u_id;
        } else {              // shopping
            return s_id - o.s_id;
        }
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(flag);
        out.writeInt(u_id);
        out.writeInt(s_id);
        out.writeInt(s_u_id);
        out.writeUTF(u_info);
        out.writeUTF(s_info);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        flag = in.readInt();
        u_id = in.readInt();
        s_id = in.readInt();
        s_u_id = in.readInt();
        u_info = in.readUTF();
        s_info = in.readUTF();
    }
}
```

The exception:

```
2018-10-08 19:55:15,246 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2018-10-08 19:55:15,250 INFO mapred.LocalJobRunner: reduce task executor complete.
2018-10-08 19:55:15,253 WARN mapred.LocalJobRunner: job_local85671337_0001
java.lang.Exception: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:559)
Caused by: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:165)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:158)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
    at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:302)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:347)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at sortjoin.MyKey.readFields(MyKey.java:43)
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:158)
    ... 12 more
2018-10-08 19:55:15,962 INFO mapreduce.Job: Job job_local85671337_0001 running in uber mode : false
2018-10-08 19:55:15,964 INFO mapreduce.Job: map 100% reduce 0%
```
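
A quick way to corner this kind of EOFException (my own sketch, not from the post — the helper class and its name are hypothetical): WritableComparator.compare() deserializes keys from raw bytes, so any asymmetry between write() and readFields() shows up in a standalone round trip. Serialize a fully-populated key, deserialize it into an empty one, and check that every byte written was read back:

```java
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.Writable;

// Hypothetical helper, assuming the key's fields can be populated somehow
// (e.g. via a constructor or setters added for the test).
public class WritableRoundTrip {
    public static void check(Writable original, Writable empty) throws Exception {
        DataOutputBuffer out = new DataOutputBuffer();
        original.write(out);                      // serialize

        DataInputBuffer in = new DataInputBuffer();
        in.reset(out.getData(), out.getLength()); // wrap the raw bytes
        empty.readFields(in);                     // an EOFException here means
                                                  // readFields consumes more
                                                  // bytes than write produced
        if (in.getPosition() != out.getLength()) {
            throw new IllegalStateException("readFields left "
                    + (out.getLength() - in.getPosition()) + " unread bytes");
        }
    }
}
```

If the round trip is clean for fully-populated keys, the next things to check are keys emitted with fields left null (writeInt/writeUTF then fail partway through) and whether setMapOutputKeyClass matches the class the mapper actually emits — a mismatch there can produce the same EOFException during the reduce-side compare.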

Urgent!!! Running the program from Eclipse on Hadoop hits the following problem — what should I do?

```
15/04/09 14:47:42 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/04/09 14:47:42 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/04/09 14:47:42 WARN snappy.LoadSnappy: Snappy native library not loaded
15/04/09 14:47:42 INFO mapred.FileInputFormat: Total input paths to process : 1
15/04/09 14:47:43 INFO mapred.JobClient: Running job: job_local_0001
15/04/09 14:47:44 INFO util.ProcessTree: setsid exited with exit code 0
15/04/09 14:47:44 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1e29f45
15/04/09 14:47:44 INFO mapred.MapTask: numReduceTasks: 1
15/04/09 14:47:44 INFO mapred.MapTask: io.sort.mb = 100
15/04/09 14:47:44 INFO mapred.JobClient: map 0% reduce 0%
15/04/09 14:47:46 INFO mapred.MapTask: data buffer = 79691776/99614720
15/04/09 14:47:46 INFO mapred.MapTask: record buffer = 262144/327680
null
15/04/09 14:47:46 WARN mapred.LocalJobRunner: job_local_0001
java.lang.NullPointerException
    at java.util.StringTokenizer.<init>(StringTokenizer.java:199)
    at java.util.StringTokenizer.<init>(StringTokenizer.java:236)
    at word.word$Map.map(word.java:124)
    at word.word$Map.map(word.java:1)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
15/04/09 14:47:47 INFO mapred.JobClient: Job complete: job_local_0001
15/04/09 14:47:47 INFO mapred.JobClient: Counters: 0
15/04/09 14:47:47 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
    at word.word.main(word.java:191)
```

MapReduce job writing to a database fails

## DBUserWritable class

```java
package org.neworigin.com.Database;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBUserWritable implements DBWritable, WritableComparable {
    private String name = "";
    private String sex = "";
    private int age = 0;
    private int num = 0;
    private String department = "";
    private String tables = "";

    @Override
    public String toString() {
        return "DBUserWritable [name=" + name + ", sex=" + sex + ", age=" + age
                + ", department=" + department + "]";
    }

    public DBUserWritable(DBUserWritable d) {
        this.name = d.getName();
        this.sex = d.getSex();
        this.age = d.getAge();
        this.num = d.getNum();
        this.department = d.getDepartment();
        this.tables = d.getTables();
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSex() { return sex; }
    public void setSex(String sex) { this.sex = sex; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public int getNum() { return num; }
    public void setNum(int num) { this.num = num; }
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
    public String getTables() { return tables; }
    public void setTables(String tables) { this.tables = tables; }

    public DBUserWritable(String name, String sex, int age, int num,
            String department, String tables) {
        super();
        this.name = name;
        this.sex = sex;
        this.age = age;
        this.num = num;
        this.department = department;
        this.tables = tables;
    }

    public DBUserWritable() {
        super();
    }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(sex);
        out.writeInt(age);
        out.writeInt(num);
        out.writeUTF(department);
        out.writeUTF(tables);
    }

    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        sex = in.readUTF();
        age = in.readInt();
        num = in.readInt();
        department = in.readUTF();
        tables = in.readUTF();
    }

    public int compareTo(Object o) {
        return 0;
    }

    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, this.getName());
        statement.setString(2, this.getSex());
        statement.setInt(3, this.getAge());
        statement.setString(4, this.getDepartment());
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        this.name = resultSet.getString(1);
        this.sex = resultSet.getString(2);
        this.age = resultSet.getInt(3);
        this.department = resultSet.getString(4);
    }
}
```

## Mapper

```java
package org.neworigin.com.Database;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UserDBMapper extends Mapper<LongWritable, Text, Text, DBUserWritable> {

    DBUserWritable DBuser = new DBUserWritable();

    @Override
    protected void map(LongWritable key, Text value,
            Mapper<LongWritable, Text, Text, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String[] values = value.toString().split(" ");
        if (values.length == 4) {
            DBuser.setName(values[0]);
            DBuser.setSex(values[1]);
            DBuser.setAge(Integer.parseInt(values[2]));
            DBuser.setNum(Integer.parseInt(values[3]));
            DBuser.setTables("t1");
            System.out.println("mapper---t1---------------" + DBuser);
            context.write(new Text(values[3]), DBuser);
        }
        if (values.length == 2) {
            DBuser.setNum(Integer.parseInt(values[0]));
            DBuser.setDepartment(values[1]);
            DBuser.setTables("t2");
            context.write(new Text(values[0]), DBuser);
            //System.out.println("mapper --t2" + "--" + values[0] + "----" + DBuser);
        }
    }
}
```

## Reducer

```java
package org.neworigin.com.Database;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UserDBReducer extends Reducer<Text, DBUserWritable, NullWritable, DBUserWritable> {
    // public DBUserWritable db = new DBUserWritable();

    @Override
    protected void reduce(Text k2, Iterable<DBUserWritable> v2,
            Reducer<Text, DBUserWritable, NullWritable, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String Name = "";
        List<DBUserWritable> list = new LinkedList<DBUserWritable>();
        for (DBUserWritable val : v2) {
            list.add(new DBUserWritable(val)); // copy into a new object for the list
            // System.out.println("[table]" + val.getTables() + "----key" + k2 + "---" + val);
            if (val.getTables().equals("t2")) {
                Name = val.getDepartment();
            }
        }
        // the key is num
        for (DBUserWritable join : list) {
            System.out.println("[table]" + join.getTables() + "----key" + k2 + "---" + join);
            if (join.getTables().equals("t1")) {
                join.setDepartment(Name);
                System.out.println("db-----" + join);
                context.write(NullWritable.get(), join);
            }
        }
    }
}
```

## App

```java
package org.neworigin.com.Database;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class UserDBAPP {
    public static void main(String[] args) throws Exception, URISyntaxException {
        String INPUT_PATH = "file:///E:/BigData_eclipse_database/Database/data/table1";
        String INPUT_PATH1 = "file:///E:/BigData_eclipse_database/Database/data/table2";
        // String OUTPUT_PARH = "file:///E:/BigData_eclipse_database/Database/data/output";
        Configuration conf = new Configuration();
        // FileSystem fs = FileSystem.get(new URI(OUTPUT_PARH), conf);
        // if (fs.exists(new Path(OUTPUT_PARH))) {
        //     fs.delete(new Path(OUTPUT_PARH));
        // }
        Job job = new Job(conf, "mydb");
        // database connection settings
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/hadoop", "root", "123456");
        FileInputFormat.addInputPaths(job, INPUT_PATH);
        FileInputFormat.addInputPaths(job, INPUT_PATH1);
        job.setMapperClass(UserDBMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DBUserWritable.class);
        job.setReducerClass(UserDBReducer.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(DBUserWritable.class);
        // FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PARH)); // set output path
        DBOutputFormat.setOutput(job, "user_tables", "name", "sex", "age", "department");
        job.setOutputFormatClass(DBOutputFormat.class);
        boolean re = job.waitForCompletion(true);
        System.out.println(re);
    }
}
```

Error (PS: this is a table join — writing the output to the local filesystem works fine; it only fails when writing to the database):

```
17/11/10 11:39:11 WARN output.FileOutputCommitter: Output Path is null in cleanupJob()
17/11/10 11:39:11 WARN mapred.LocalJobRunner: job_local1812680657_0001
java.lang.Exception: java.io.IOException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:541)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 running in uber mode : false
17/11/10 11:39:12 INFO mapreduce.Job: map 100% reduce 0%
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 failed with state FAILED due to: NA
17/11/10 11:39:12 INFO mapreduce.Job: Counters: 35
```
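
One thing worth checking in the App class above (a hedged guess, not something confirmed in the thread): the Job is constructed *before* DBConfiguration.configureDB is called. Constructing a Job copies the Configuration, so JDBC settings applied to `conf` afterwards never reach the job's own copy, and DBOutputFormat.getRecordWriter then fails with an IOException when it cannot open a connection. A minimal reordering sketch:

```java
Configuration conf = new Configuration();

// Apply the JDBC settings BEFORE the Job is created, so they end up in the
// copy of the Configuration that the Job actually carries. (The MySQL driver
// jar must also be on the runtime classpath.)
DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/hadoop", "root", "123456");

Job job = new Job(conf, "mydb");
// ... the rest of the job setup is unchanged ...
```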

Configured by following the steps exactly; I've been searching for days and can't find where it went wrong

```
[Fatal Error] :79:136: Character reference "&#24" is an invalid XML character.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; lineNumber: 79; columnNumber: 136; Character reference "&#24" is an invalid XML character.
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1168)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1040)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:382)
    at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:1662)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:215)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:93)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
    at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
    at org.apache.nutch.crawl.Crawl.main(Crawl.java:124)
Caused by: org.xml.sax.SAXParseException; lineNumber: 79; columnNumber: 136; Character reference "&#24" is an invalid XML character.
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:121)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1092)
    ... 12 more
```

Help needed: importing data into Hadoop with MapReduce throws ClassNotFoundException

I've recently been connecting MapReduce to Hadoop and hitting all kinds of problems. Environment: Hadoop 2.7.0, HBase 1.0.1.1. At first it reported HBaseConfiguration not found; searching suggested copying the jars under hbase/lib into hadoop/lib. I did that, no luck, and tweaking various Hadoop parameters per the reference material still produced the same exception. In the end I had to edit hadoop-env.sh and add HBase's lib directory to the classpath. ![screenshot](https://img-ask.csdn.net/upload/201509/14/1442242731_955123.jpg) That finally made the exception go away. But then something even more baffling happened: ![screenshot](https://img-ask.csdn.net/upload/201509/14/1442242933_762859.jpg) Now it reports that my own classes cannot be found, and I couldn't find an answer in any thread I searched.

```java
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf = HBaseConfiguration.create(conf);
    if (args.length != 6) {
        System.err.println("Usage: MroFormat <in-mro> <in-xdr> <sample tbname> <event tbname>");
        System.exit(2);
    }
    //makeConfig(conf, args);
    String inpath1 = args[0];
    String inpath2 = args[1];
    Job job = Job.getInstance(conf, "MyTest");
    job.setNumReduceTasks(40);
    job.setJarByClass(Main.class);
    job.setReducerClass(ReduceDeal.MroFormatReducer.class);
    //job.setReducerClass(ReduceDeal.TestReducer.class);
    job.setSortComparatorClass(MapDeal.SortKeyComparator.class);
    job.setPartitionerClass(MapDeal.CellIDPartitioner.class);
    job.setGroupingComparatorClass(MapDeal.SortKeyGroupComparator.class);
    job.setMapOutputKeyClass(CellTimeKeyPare.class);
    job.setMapOutputValueClass(Text.class);
    MultipleInputs.addInputPath(job, new Path(inpath1), KeyValueTextInputFormat.class, MapDeal.MroMapper.class);
    MultipleInputs.addInputPath(job, new Path(inpath2), TextInputFormat.class, MapDeal.XdrMapper.class);
    job.setOutputFormatClass(MultiTableOutputFormat.class);
    //job.setOutputFormatClass(NullOutputFormat.class);
    //LOG.info(job.getPartitionerClass().getName());
    //TableMapReduceUtil.addDependencyJars(job);
    //TableMapReduceUtil.addDependencyJars(job.getConfiguration());
    //TableMapReduceUtil.initTableReducerJob("tab1", ReduceDeal.MroFormatReducer.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
```

On my local Win7 machine the code runs fine, but packaged and deployed to the server it fails with classes not found. After removing the line `conf = HBaseConfiguration.create(conf);`, the classes that follow can be found again. Please take a look — many thanks.
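
A hedged suggestion for the post above (not a confirmed fix): when a job jar is submitted to a real cluster, HBase's jars are not shipped with it unless you ask for that, which is what the commented-out TableMapReduceUtil lines would do. Re-enabling them is worth a try:

```java
// Ship HBase (and other dependency) jars with the job instead of relying on
// the cluster's classpath. Both calls come from HBase's TableMapReduceUtil;
// the class list in the second call is illustrative, taken from the post.
TableMapReduceUtil.addDependencyJars(job);
TableMapReduceUtil.addDependencyJars(job.getConfiguration(),
        CellTimeKeyPare.class, MapDeal.class, ReduceDeal.class);
```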

Nutch: unable to create new native thread

While crawling with Nutch it fails with the error below; here is the log — could someone explain what's going on?

```
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:597)
    at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
    at org.apache.nutch.parse.ParseUtil.runParser(ParseUtil.java:159)
    at org.apache.nutch.parse.ParseUtil.parse(ParseUtil.java:93)
    at org.apache.nutch.parse.ParseSegment.map(ParseSegment.java:97)
    at org.apache.nutch.parse.ParseSegment.map(ParseSegment.java:44)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
```

MapReduce: looks like an override issue, asking for help

The error:

```
zxy@zxy-virtual-machine:/usr/hadoop/hadoop-2.4.0$ hadoop jar WordCount.jar WordCount /input /output
15/04/23 07:12:49 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/04/23 07:12:49 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/04/23 07:12:50 INFO input.FileInputFormat: Total input paths to process : 1
15/04/23 07:12:50 INFO mapreduce.JobSubmitter: number of splits:1
15/04/23 07:12:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local168934583_0001
15/04/23 07:12:51 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/staging/zxy168934583/.staging/job_local168934583_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
15/04/23 07:12:51 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/staging/zxy168934583/.staging/job_local168934583_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
15/04/23 07:12:52 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/local/localRunner/zxy/job_local168934583_0001/job_local168934583_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
15/04/23 07:12:52 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/local/localRunner/zxy/job_local168934583_0001/job_local168934583_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
15/04/23 07:12:52 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/04/23 07:12:52 INFO mapreduce.Job: Running job: job_local168934583_0001
15/04/23 07:12:52 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/04/23 07:12:52 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/04/23 07:12:52 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/23 07:12:52 INFO mapred.LocalJobRunner: Starting task: attempt_local168934583_0001_m_000000_0
15/04/23 07:12:52 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/04/23 07:12:52 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/input/data.txt:0+57
15/04/23 07:12:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/04/23 07:12:53 INFO mapreduce.Job: Job job_local168934583_0001 running in uber mode : false
15/04/23 07:12:53 INFO mapreduce.Job: map 0% reduce 0%
15/04/23 07:12:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/04/23 07:12:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/04/23 07:12:55 INFO mapred.MapTask: soft limit at 83886080
15/04/23 07:12:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/04/23 07:12:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/04/23 07:12:55 INFO mapred.MapTask: Starting flush of map output
15/04/23 07:12:55 INFO mapred.MapTask: Spilling map output
15/04/23 07:12:55 INFO mapred.MapTask: bufstart = 0; bufend = 36; bufvoid = 104857600
15/04/23 07:12:55 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
15/04/23 07:12:55 INFO mapred.MapTask: Finished spill 0
15/04/23 07:12:55 INFO mapred.LocalJobRunner: map task executor complete.
15/04/23 07:12:55 WARN mapred.LocalJobRunner: job_local168934583_0001
java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 3
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
    at WordCount$TokenizerMapper.map(WordCount.java:35)
    at WordCount$TokenizerMapper.map(WordCount.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
15/04/23 07:12:55 INFO mapreduce.Job: Job job_local168934583_0001 failed with state FAILED due to: NA
15/04/23 07:12:55 INFO mapreduce.Job: Counters: 0
zxy@zxy-virtual-machine:/usr/hadoop/hadoop-2.4.0$ hadoop fs -ls /output
```
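
The real failure in this log is not the "override final parameter" warnings (those are harmless) but the ArrayIndexOutOfBoundsException: 3 at WordCount.java:35 — the map method indexes into a split line that has fewer fields than expected. The asker's mapper isn't shown, so the following is only an illustrative sketch of the usual guard (class and field positions are assumptions, not the original code):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative mapper: check the token count before indexing, so blank or
// malformed lines are skipped instead of crashing the task with
// ArrayIndexOutOfBoundsException.
public class GuardedTokenizerMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\\s+");
        if (fields.length < 4) {
            return; // fewer tokens than expected: skip the record
        }
        context.write(new Text(fields[3]), ONE);
    }
}
```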

Running a MapReduce program from Eclipse on Windows 10: a space in the Windows user name breaks generation/reading of the tmp files

The error is as follows:

```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.10.0/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/slf4j/slf4j-simple/1.6.6/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2019-09-05 10:27:02,488 WARN [main] impl.MetricsConfig (MetricsConfig.java:134) - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2019-09-05 10:27:04,715 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:147) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-09-05 10:27:04,743 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:480) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2019-09-05 10:27:10,228 WARN [pool-8-thread-1] impl.MetricsSystemImpl (MetricsSystemImpl.java:151) - JobTracker metrics system already initialized!
2019-09-05 10:27:10,326 WARN [Thread-6] mapred.LocalJobRunner$Job (LocalJobRunner.java:590) - job_local64686135_0001
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) ~[hadoop-mapreduce-client-common-3.1.2.jar:?]
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:559) [hadoop-mapreduce-client-common-3.1.2.jar:?]
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:377) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:347) ~[hadoop-mapreduce-client-common-3.1.2.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_221]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_221]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_221]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_221]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_221]
Caused by: java.io.FileNotFoundException: File D:/tmp/hadoop-William%20Scott/mapred/local/localRunner/icss/jobcache/job_local64686135_0001/attempt_local64686135_0001_m_000000_0/output/file.out.index does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:211) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:152) ~[hadoop-common-3.1.2.jar:?]
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:71) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:125) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:103) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
    at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:86) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
```

The current situation: Windows 10 is signed in with a Microsoft account, which automatically puts a space in the user name, and changing the account isn't really an option. The Hadoop runtime lives on the D: drive, though not in the root directory. Is there a way to have the local tmp files generated somewhere else, or to rename the hadoop-William%20Scott folder? Thanks.
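
A hedged workaround (not verified against this exact setup): the localRunner path is derived from the hadoop.tmp.dir property, which defaults to /tmp/hadoop-${user.name}, so overriding it with a fixed, space-free path in the job's Configuration should keep the user name out of the tmp path entirely:

```java
Configuration conf = new Configuration();
// Point Hadoop's scratch space at a fixed, space-free directory instead of
// the default /tmp/hadoop-${user.name}; LocalJobRunner derives its
// localRunner paths from this setting.
conf.set("hadoop.tmp.dir", "D:/hadoop-tmp");
Job job = Job.getInstance(conf, "wordcount"); // job name is illustrative
```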

Nutch + MongoDB + ElasticSearch + Kibana setup: the inject step fails

I set up a Nutch + MongoDB + ElasticSearch + Kibana environment on Linux; Nutch was built from the apache-nutch-2.3.1-src.tar.gz sources. I followed http://blog.csdn.net/github_27609763/article/details/50597427 for the setup, but running ./bin/nutch inject urls/ fails — any guidance would be greatly appreciated. The configuration is as follows.

nutch-site.xml:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>storage.data.store.class</name>
    <value>org.apache.gora.mongodb.store.MongoStore</value>
    <description>Default class for storing data</description>
  </property>
  <property>
    <name>http.agent.name</name>
    <value>Hist Crawler</value>
  </property>
  <property>
    <name>plugin.includes</name>
    <value>protocol-(http|httpclient)|urlfilter-regex|index-(basic|more)|query-(basic|site|url|lang)|indexer-elastic|nutch-extensionpoints|parse-(text|html|msexcel|msword|mspowerpoint|pdf)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)|parse-(html|tika|metatags)|index-(basic|anchor|more|metadata)</value>
  </property>
  <property>
    <name>elastic.host</name>
    <value>localhost</value>
  </property>
  <property>
    <name>elastic.cluster</name>
    <value>hist</value>
  </property>
  <property>
    <name>elastic.index</name>
    <value>nutch</value>
  </property>
  <property>
    <name>parser.character.encoding.default</name>
    <value>utf-8</value>
  </property>
  <property>
    <name>http.content.limit</name>
    <value>6553600</value>
  </property>
</configuration>
```

regex-urlfilter.txt is configured as follows:

```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The default url filter.
# Better for whole-internet crawling.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'.  The first matching pattern in the file
# determines whether a URL is included or ignored.  If no pattern
# matches, the URL is ignored.

# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/

# accept anything else
+^http://([a-z0-9]*\.)*nutch.apache.org/
# +.
```

And seed.txt under urls/ contains:

```
[root@jdu4e00u53f7 urls]# pwd
/chen/nutch/runtime/local/urls
[root@jdu4e00u53f7 urls]# cat seed.txt
http://blog.csdn.net/
[root@jdu4e00u53f7 urls]#
```

Finally, the error log:

```
2017-09-25 23:35:17,648 INFO crawl.InjectorJob - InjectorJob: starting at 2017-09-25 23:35:17
2017-09-25 23:35:17,649 INFO crawl.InjectorJob - InjectorJob: Injecting urlDir: urls
2017-09-25 23:35:18,058 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-09-25 23:35:19,115 INFO crawl.InjectorJob - InjectorJob: Using class org.apache.gora.mongodb.store.MongoStore as the Gora storage class.
2017-09-25 23:35:20,006 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/staging/root1639902035/.staging/job_local1639902035_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-09-25 23:35:20,009 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/staging/root1639902035/.staging/job_local1639902035_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-09-25 23:35:20,172 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1639902035_0001/job_local1639902035_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-09-25 23:35:20,175 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1639902035_0001/job_local1639902035_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-09-25 23:35:20,504 WARN mapred.LocalJobRunner - job_local1639902035_0001
java.lang.Exception: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
    at org.apache.nutch.net.URLNormalizers.<init>(URLNormalizers.java:141)
    at org.apache.nutch.crawl.InjectorJob$UrlMapper.setup(InjectorJob.java:94)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2017-09-25 23:35:21,198 ERROR crawl.InjectorJob - InjectorJob: java.lang.RuntimeException: job failed: name=apache-nutch-2.3.1.jar, jobid=job_local1639902035_0001
    at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:120)
    at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:231)
    at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:252)
    at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:275)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:284)
```

Customizing what Nutch crawls

I'm new to Nutch. Environment: CentOS 6.5 64-bit in a VirtualBox VM, with Nutch 2.3 checked out from the official site with svn under CentOS. The initial requirement: given URLs I define, crawl the pages whose content matches certain keywords (or HTML tags, or regular expressions), and analyze them later (a separate topic). While learning I found that in Nutch 2.3 the bin/crawl command replaces the old bin/nutch crawl, and the parameter list is almost completely different. I tried the following. The 2.3 usage is:

```
bin/crawl
Usage: crawl <seedDir> <crawlID> [<solrUrl>] <numberOfRounds>
```

So I ran: `bin/crawl urls/ MyFirstCrawl http://localhost:8080/solr 6`, where urls is the parent directory of my seed file (layout: urls/urls.txt, with the page URLs to crawl inside urls.txt), MyFirstCrawl is my crawl name, and the solrUrl was filled in arbitrarily. I then get the following error:

```
/home/release-2.3/runtime/local/bin/nutch generate -D mapred.reduce.tasks=2 -D mapred.child.java.opts=-Xmx1000m -D mapred.reduce.tasks.speculative.execution=false -D mapred.map.tasks.speculative.execution=false -D mapred.compress.map.output=true -topN 50000 -noNorm -noFilter -adddays 0 -crawlId MyFirstCrawl -batchId 1455677914-18990
GeneratorJob: starting at 2016-02-17 10:58:35
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: false
GeneratorJob: normalizing: false
GeneratorJob: topN: 50000
java.util.NoSuchElementException
    at java.util.TreeMap.key(TreeMap.java:1221)
    at java.util.TreeMap.firstKey(TreeMap.java:285)
    at org.apache.gora.memory.store.MemStore.execute(MemStore.java:125)
    at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:73)
    at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:68)
    at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:110)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:531)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
GeneratorJob: finished at 2016-02-17 10:58:38, time elapsed: 00:00:02
GeneratorJob: generated batch id: 1455677914-18990 containing 0 URLs
Generate returned 1 (no new segments created)
Escaping loop: no more URLs to fetch now
```

What does this error mean, and how should I adjust things? Also, what configuration do I need in order to implement the crawling requirement described at the top?

[Newbie] Hadoop MapReduce job runs but Map produces no output

I followed this blog post (hadoop 2.6 distributed, simple example: statistics of the highest temperature per year and sorting temperatures by year, by 原明卓 on CSDN): http://blog.csdn.net/lablenet/article/details/50608197#java. The run output is below.

-----------------------------------------------

```
16/10/19 05:27:51 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
16/10/19 05:27:52 INFO input.FileInputFormat: Total input paths to process : 1
16/10/19 05:27:52 INFO util.NativeCodeLoader: Loaded the native-hadoop library
16/10/19 05:27:52 WARN snappy.LoadSnappy: Snappy native library not loaded
16/10/19 05:27:54 INFO mapred.JobClient: Running job: job_201610190234_0013
16/10/19 05:27:55 INFO mapred.JobClient: map 0% reduce 0%
16/10/19 05:28:24 INFO mapred.JobClient: map 100% reduce 0%
16/10/19 05:28:41 INFO mapred.JobClient: map 100% reduce 20%
16/10/19 05:28:42 INFO mapred.JobClient: map 100% reduce 40%
16/10/19 05:28:50 INFO mapred.JobClient: map 100% reduce 46%
16/10/19 05:28:51 INFO mapred.JobClient: map 100% reduce 60%
16/10/19 05:29:01 INFO mapred.JobClient: map 100% reduce 100%
16/10/19 05:29:01 INFO mapred.JobClient: Job complete: job_201610190234_0013
16/10/19 05:29:01 INFO mapred.JobClient: Counters: 28
16/10/19 05:29:01 INFO mapred.JobClient:   Job Counters
16/10/19 05:29:01 INFO mapred.JobClient:     Launched reduce tasks=6
16/10/19 05:29:01 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=26528
16/10/19 05:29:01 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
16/10/19 05:29:01 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
16/10/19 05:29:01 INFO mapred.JobClient:     Launched map tasks=1
16/10/19 05:29:01 INFO mapred.JobClient:     Data-local map tasks=1
16/10/19 05:29:01 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=107381
16/10/19 05:29:01 INFO mapred.JobClient:   File Output Format Counters
16/10/19 05:29:01 INFO mapred.JobClient:     Bytes Written=0
16/10/19 05:29:01 INFO mapred.JobClient:   FileSystemCounters
16/10/19 05:29:01 INFO mapred.JobClient:     FILE_BYTES_READ=30
16/10/19 05:29:01 INFO mapred.JobClient:     HDFS_BYTES_READ=1393
16/10/19 05:29:01 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=354256
16/10/19 05:29:01 INFO mapred.JobClient:   File Input Format Counters
16/10/19 05:29:01 INFO mapred.JobClient:     Bytes Read=1283
16/10/19 05:29:01 INFO mapred.JobClient:   Map-Reduce Framework
16/10/19 05:29:01 INFO mapred.JobClient:     Map output materialized bytes=30
16/10/19 05:29:01 INFO mapred.JobClient:     Map input records=46
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce shuffle bytes=30
16/10/19 05:29:01 INFO mapred.JobClient:     Spilled Records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Map output bytes=0
16/10/19 05:29:01 INFO mapred.JobClient:     CPU time spent (ms)=16910
16/10/19 05:29:01 INFO mapred.JobClient:     Total committed heap usage (bytes)=195301376
16/10/19 05:29:01 INFO mapred.JobClient:     Combine input records=0
16/10/19 05:29:01 INFO mapred.JobClient:     SPLIT_RAW_BYTES=110
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce input records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce input groups=0
16/10/19 05:29:01 INFO mapred.JobClient:     Combine output records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Physical memory (bytes) snapshot=331567104
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce output records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2264113152
16/10/19 05:29:01 INFO mapred.JobClient:     Map output records=0
```

-----------------------------------------------

The source data format is `yyyy-MM-dd HH:mm:ss\t<temperature>`, for example `1995-10-10 10:10:10 6.54`. In RunJob, in

```java
int year = c.get(1);
String hot = ss[1].substring(0, ss[1].lastIndexOf("°C"));
KeyPari keyPari = new KeyPari();
keyPari.setYear(year);
```

I changed the `°C` to `\n`.

-----------------------------------------------

The code is the same as in the blog post, except that I removed the IF check inside Map and changed the input/output paths. Could someone please explain why this happens? Much appreciated.
