Which package/jar is the class org.apache.hadoop.mapred.LocalJobRunner in?

I'm using the Sqoop 1 Java API, but as soon as I execute a command it throws the error below. The Hadoop cluster is not on the machine that runs the program. Am I missing this class? I've gone through my dependencies and it really isn't there.

 Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.LocalJobRunner.<init>(Lorg/apache/hadoop/conf/Configuration;)V
    at org.apache.hadoop.mapred.LocalClientProtocolProvider.create(LocalClientProtocolProvider.java:42)
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:95)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1260)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1256)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1284)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:322)
    at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:299)
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:440)
    at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at com.mshuoke.datagw.impl.sqoop.SqoopTest.main(SqoopTest.java:52)
09:55:47.069 [Thread-4] DEBUG org.apache.hadoop.util.ShutdownHookManager - ShutdownHookManger complete shutdown.

1 answer

Just a personal suggestion: try checking it in Maven.

u011856283
Hi Jamie, it was duplicate jars in the project: two org.apache.hadoop dependencies both contained this package, and the conflict caused the error. Removing one of them solved it.
About 2 years ago · Reply
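For anyone hitting the same thing: in Hadoop 2.x the class itself ships in the hadoop-mapreduce-client-common artifact, and a NoSuchMethodError (rather than a ClassNotFoundException) means the class was found but was loaded from a jar of the wrong version, which matches the duplicate-dependency diagnosis above. A minimal sketch to confirm which jar the JVM actually resolves the class from (run it with the same classpath that produced the error):

```
public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c = Class.forName("org.apache.hadoop.mapred.LocalJobRunner");
        // for a class loaded from a jar this prints the jar's location;
        // with two Hadoop versions on the classpath, this shows which one wins
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}
```
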
Other related questions
Running a Hadoop MapReduce program in Eclipse gives the following error

2017-09-06 15:48:42,677 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1460)) - Starting flush of map output
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1482)) - Spilling map output
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1483)) - bufstart = 0; bufend = 108; bufvoid = 104857600
2017-09-06 15:48:42,686 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1485)) - kvstart = 26214396(104857584); kvend = 26214352(104857408); length = 45/6553600
2017-09-06 15:48:42,733 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1667)) - Finished spill 0
2017-09-06 15:48:42,743 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1038)) - Task:attempt_local1469942249_0001_m_000000_0 is done. And is in the process of committing
2017-09-06 15:48:42,751 INFO [Thread-19] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2017-09-06 15:48:42,783 WARN [Thread-19] mapred.LocalJobRunner (LocalJobRunner.java:run(560)) - job_local1469942249_0001
java.lang.Exception: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.getRssMemorySize()J
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.getRssMemorySize()J
    at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:872)
    at org.apache.hadoop.mapred.Task.updateCounters(Task.java:1021)
    at org.apache.hadoop.mapred.Task.done(Task.java:1040)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2017-09-06 15:48:43,333 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1360)) - Job job_local1469942249_0001 running in uber mode : false
2017-09-06 15:48:43,335 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 0% reduce 0%
2017-09-06 15:48:43,337 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Job job_local1469942249_0001 failed with state FAILED due to: NA
2017-09-06 15:48:43,352 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1385)) - Counters: 10
    Map-Reduce Framework
        Map input records=12
        Map output records=12
        Map output bytes=108
        Map output materialized bytes=0
        Input split bytes=104
        Combine input records=0
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
    File Input Format Counters
        Bytes Read=132
Finished
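This NoSuchMethodError usually means the hadoop-mapreduce jars on the Eclipse classpath are newer than the hadoop-yarn jars: mapred.Task calls ResourceCalculatorProcessTree.getRssMemorySize(), which an older yarn jar does not have. A hedged reflection check, runnable on the same classpath, to confirm the mismatch:

```
public class YarnVersionCheck {
    public static void main(String[] args) throws Exception {
        Class<?> tree = Class.forName("org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree");
        // which yarn jar is actually on the classpath
        System.out.println(tree.getProtectionDomain().getCodeSource().getLocation());
        // throws NoSuchMethodException if that jar predates getRssMemorySize()
        System.out.println(tree.getMethod("getRssMemorySize"));
    }
}
```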

My very first Hadoop program already runs into a problem; could someone experienced take a look?

If the program is packaged as a jar it runs fine from the command line, but inside IDEA I get this error:

17/03/11 15:21:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    org/apache/hadoop/mapred/JobTrackerInstrumentation.create(Lorg/apache/hadoop/mapred/JobTracker;Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/JobTrackerInstrumentation; @5: invokestatic
  Reason:
    Type 'org/apache/hadoop/metrics2/lib/DefaultMetricsSystem' (current frame, stack[2]) is not assignable to 'org/apache/hadoop/metrics2/MetricsSystem'
  Current Frame:
    bci: @5
    flags: { }
    locals: { 'org/apache/hadoop/mapred/JobTracker', 'org/apache/hadoop/mapred/JobConf' }
    stack: { 'org/apache/hadoop/mapred/JobTracker', 'org/apache/hadoop/mapred/JobConf', 'org/apache/hadoop/metrics2/lib/DefaultMetricsSystem' }
  Bytecode:
    0x0000000: 2a2b b200 03b8 0004 b0
    at org.apache.hadoop.mapred.LocalJobRunner.<init>(LocalJobRunner.java:573)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:494)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at com.hadoop.maxtemperature.MaxTemperature.main(MaxTemperature.java:31)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

The Maven dependencies:

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>
</dependencies>
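The pom above mixes hadoop-common 2.7.3 with hadoop-core 1.2.1. hadoop-core is the old Hadoop 1.x artifact; its LocalJobRunner still targets the 1.x metrics classes, and loading it next to 2.7.3's metrics2 classes is exactly the kind of binary incompatibility a VerifyError reports. A hedged sketch of an aligned dependency block, assuming the code targets a 2.7.x runtime:

```
<dependencies>
    <!-- sketch: keep every Hadoop artifact on a single version line and drop the
         1.x-era hadoop-core entirely; hadoop-client pulls in common, hdfs and mapreduce -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.3</version>
    </dependency>
</dependencies>
```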

Hadoop throws the following error and it's driving me crazy

Exception in thread "main" java.io.IOException: Cannot run program "chmod": CreateProcess error=2, ?????????
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:354)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:337)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:481)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:473)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:280)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:372)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:208)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:92)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
    at cn.xyp.hadoop.test1.run(test1.java:63)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at cn.xyp.hadoop.test1.main(test1.java:21)
Caused by: java.io.IOException: CreateProcess error=2, ?????????
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:81)
    at java.lang.ProcessImpl.start(ProcessImpl.java:30)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
    ... 24 more
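This trace is Hadoop 1.x running on Windows: even in local mode RawLocalFileSystem shells out to an external chmod, so the command has to be resolvable on PATH (the classic fix was installing Cygwin and adding its bin directory to PATH; CreateProcess error=2 is Windows for "file not found"). A small pre-flight check, offered as a sketch:

```
import java.io.IOException;

public class ChmodCheck {
    public static void main(String[] args) {
        try {
            // same launch mechanism Hadoop's Shell class uses
            new ProcessBuilder("chmod", "--help").start();
            System.out.println("chmod is resolvable on PATH");
        } catch (IOException e) {
            System.out.println("chmod NOT on PATH: " + e.getMessage());
        }
    }
}
```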

Using Eclipse to connect to a Hadoop cluster in virtual machines and run a MapReduce program, but it reports the errors below. How can I fix this?

# Note: the various Hadoop "advanced parameter" settings in Eclipse have all been configured according to the config files, but execution still reports the errors below. How can this be solved?
# Execution log:

2018-09-22 22:59:11,429 INFO [org.apache.commons.beanutils.FluentPropertyBeanIntrospector] - Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
2018-09-22 22:59:11,443 WARN [org.apache.hadoop.metrics2.impl.MetricsConfig] - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2018-09-22 22:59:11,496 INFO [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - Scheduled Metric snapshot period at 10 second(s).
2018-09-22 22:59:11,496 INFO [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] - JobTracker metrics system started
2018-09-22 22:59:20,863 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-09-22 22:59:20,879 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2018-09-22 22:59:20,928 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input files to process : 1
2018-09-22 22:59:20,984 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:1
2018-09-22 22:59:21,072 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local1513265977_0001
2018-09-22 22:59:21,074 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Executing with tokens: []
2018-09-22 22:59:21,950 INFO [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Creating symlink: \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv <- G:\java_workspace\MapReduce_DEMO/movies.csv
2018-09-22 22:59:21,995 WARN [org.apache.hadoop.fs.FileUtil] - Command 'E:\hadoop-3.0.0\bin\winutils.exe symlink G:\java_workspace\MapReduce_DEMO\movies.csv \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv' failed 1 with: CreateSymbolicLink error (1314): ???????????
2018-09-22 22:59:21,995 WARN [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Failed to create symlink: \tmp\hadoop-启政先生\mapred\local\1537628361150\movies.csv <- G:\java_workspace\MapReduce_DEMO/movies.csv
2018-09-22 22:59:21,996 INFO [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Localized hdfs://192.168.5.110:9000/temp/input/movies.csv as file:/tmp/hadoop-启政先生/mapred/local/1537628361150/movies.csv
2018-09-22 22:59:22,046 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2018-09-22 22:59:22,047 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local1513265977_0001
2018-09-22 22:59:22,047 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2018-09-22 22:59:22,051 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - File Output Committer Algorithm version is 2
2018-09-22 22:59:22,051 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-09-22 22:59:22,052 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-09-22 22:59:22,100 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2018-09-22 22:59:22,101 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local1513265977_0001_m_000000_0
2018-09-22 22:59:22,120 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - File Output Committer Algorithm version is 2
2018-09-22 22:59:22,120 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-09-22 22:59:22,128 INFO [org.apache.hadoop.yarn.util.ProcfsBasedProcessTree] - ProcfsBasedProcessTree currently is supported only on Linux.
2018-09-22 22:59:22,169 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@7ef907ef
2018-09-22 22:59:22,172 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: hdfs://192.168.5.110:9000/temp/input/ratings.csv:0+2438233
----------cachePath=/temp/input/movies.csv----------
2018-09-22 22:59:22,226 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map task executor complete.
2018-09-22 22:59:22,233 WARN [org.apache.hadoop.mapred.LocalJobRunner] - job_local1513265977_0001
java.lang.Exception: java.io.FileNotFoundException: \temp\input\movies.csv (The system cannot find the path specified.)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.io.FileNotFoundException: \temp\input\movies.csv (The system cannot find the path specified.)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileReader.<init>(Unknown Source)
    at MovieJoinExercise1.MovieJoin$MovieJoinMapper.setup(MovieJoin.java:79)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
2018-09-22 22:59:23,051 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1513265977_0001 running in uber mode : false
2018-09-22 22:59:23,052 INFO [org.apache.hadoop.mapreduce.Job] - map 0% reduce 0%
2018-09-22 22:59:23,053 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local1513265977_0001 failed with state FAILED due to: NA
2018-09-22 22:59:23,058 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 0
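The two warnings explain the FileNotFoundException: winutils.exe could not create the localized symlink (CreateSymbolicLink error 1314 is a missing Windows privilege, typically cured by running the IDE as administrator or granting the create-symbolic-link right), so the mapper's setup() then fails opening the file by a bare local path. Independent of the privilege fix, reading the cache file through the Hadoop FileSystem API avoids relying on the symlink at all; a sketch, with the class name and the assumption that movies.csv is the first registered cache file:

```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MovieJoinMapperSketch extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // the URIs registered via job.addCacheFile(...), still pointing at HDFS
        URI[] cacheFiles = context.getCacheFiles();
        FileSystem fs = FileSystem.get(cacheFiles[0], context.getConfiguration());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path(cacheFiles[0]))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // build the movies lookup table here, as the original setup() did
            }
        }
    }
}
```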

Hadoop serialization problem: a class implementing WritableComparable whose readFields throws EOFException

```
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class MyKey implements WritableComparable<MyKey> {
    // flag == 1 : user
    // flag == 0 : shopping
    private Integer flag;
    private Integer u_id;
    private Integer s_id;
    private Integer s_u_id;
    private String u_info;
    private String s_info;

    @Override
    public int compareTo(MyKey o) {
        if (flag.equals(1)) {
            // user
            return u_id - o.u_id;
        } else {
            // shopping
            return s_id - o.s_id;
        }
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(flag);
        out.writeInt(u_id);
        out.writeInt(s_id);
        out.writeInt(s_u_id);
        out.writeUTF(u_info);
        out.writeUTF(s_info);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        flag = in.readInt();
        u_id = in.readInt();
        s_id = in.readInt();
        s_u_id = in.readInt();
        u_info = in.readUTF();
        s_info = in.readUTF();
    }
}
```

The exception:

2018-10-08 19:55:15,246 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2018-10-08 19:55:15,250 INFO mapred.LocalJobRunner: reduce task executor complete.
2018-10-08 19:55:15,253 WARN mapred.LocalJobRunner: job_local85671337_0001
java.lang.Exception: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:559)
Caused by: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:165)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:158)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
    at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:302)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:347)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at sortjoin.MyKey.readFields(MyKey.java:43)
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:158)
    ... 12 more
2018-10-08 19:55:15,962 INFO mapreduce.Job: Job job_local85671337_0001 running in uber mode : false
2018-10-08 19:55:15,964 INFO mapreduce.Job: map 100% reduce 0%
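The EOFException is thrown while WritableComparator deserializes MyKey for the reduce-side key comparison, so the bytes produced by write() are not round-tripping through readFields() for every key the job emitted; note in particular that the Integer and String fields have no defaults, and a key with unset fields cannot even be serialized cleanly. A standalone round trip outside MapReduce is the quickest way to pin this down; a sketch, assuming setters or a constructor exist to populate the fields:

```
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MyKeyRoundTrip {
    public static void main(String[] args) throws IOException {
        MyKey original = new MyKey();
        // populate EVERY field here: write() unboxes the Integers, so a null
        // field fails immediately, and any write/read asymmetry starves
        // readFields() of bytes, which is exactly an EOFException

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        MyKey copy = new MyKey();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        // an EOFException on the line above reproduces the job failure in isolation
        System.out.println("round trip OK");
    }
}
```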

Urgent!!! The following problem appears when running an Eclipse program on Hadoop. What should I do?

15/04/09 14:47:42 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/04/09 14:47:42 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/04/09 14:47:42 WARN snappy.LoadSnappy: Snappy native library not loaded
15/04/09 14:47:42 INFO mapred.FileInputFormat: Total input paths to process : 1
15/04/09 14:47:43 INFO mapred.JobClient: Running job: job_local_0001
15/04/09 14:47:44 INFO util.ProcessTree: setsid exited with exit code 0
15/04/09 14:47:44 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1e29f45
15/04/09 14:47:44 INFO mapred.MapTask: numReduceTasks: 1
15/04/09 14:47:44 INFO mapred.MapTask: io.sort.mb = 100
15/04/09 14:47:44 INFO mapred.JobClient: map 0% reduce 0%
15/04/09 14:47:46 INFO mapred.MapTask: data buffer = 79691776/99614720
15/04/09 14:47:46 INFO mapred.MapTask: record buffer = 262144/327680
null
15/04/09 14:47:46 WARN mapred.LocalJobRunner: job_local_0001
java.lang.NullPointerException
    at java.util.StringTokenizer.<init>(StringTokenizer.java:199)
    at java.util.StringTokenizer.<init>(StringTokenizer.java:236)
    at word.word$Map.map(word.java:124)
    at word.word$Map.map(word.java:1)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
15/04/09 14:47:47 INFO mapred.JobClient: Job complete: job_local_0001
15/04/09 14:47:47 INFO mapred.JobClient: Counters: 0
15/04/09 14:47:47 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
    at word.word.main(word.java:191)
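The NullPointerException comes from word.java:124 handing a null String to new StringTokenizer(...), not from Hadoop itself. Without the mapper source this is only a hedge, but a defensive old-API map() that skips null or blank records would look like this (class and variable names are illustrative):

```
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class GuardedWordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        String line = (value == null) ? null : value.toString();
        if (line == null || line.trim().isEmpty()) {
            return; // skip the record instead of handing null to StringTokenizer
        }
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}
```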

Running a Hadoop MapReduce example from Eclipse reports an error

Running the examples bundled with Hadoop from the terminal works fine, and the Hadoop nodes are healthy. The error is as follows:

17/09/05 20:20:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/05 20:20:16 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
17/09/05 20:20:16 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
Exception in thread "main" java.net.ConnectException: Call From master/192.168.1.110 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at mapreduce.Temperature.main(Temperature.java:202)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 28 more
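The bundled examples work from the terminal because they pick up the cluster's core-site.xml; the Eclipse run is falling back to localhost:9000, where nothing is listening. Either put the cluster's core-site.xml and hdfs-site.xml on the project classpath or set the address explicitly. A sketch that probes the same getFileInfo RPC the submission dies on; the address is an assumption to be replaced with the real fs.defaultFS value:

```
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hypothetical address; must match the NameNode that is actually listening
        conf.set("fs.defaultFS", "hdfs://192.168.1.110:9000");
        FileSystem fs = FileSystem.get(URI.create(conf.get("fs.defaultFS")), conf);
        System.out.println(fs.exists(new Path("/"))); // fails fast if the address is wrong
    }
}
```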

A MapReduce job writing to a database reports an error

## [DBUserWritable class]

package org.neworigin.com.Database;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBUserWritable implements DBWritable, WritableComparable {
    private String name = "";
    private String sex = "";
    private int age = 0;
    private int num = 0;
    private String department = "";
    private String tables = "";

    @Override
    public String toString() {
        return "DBUserWritable [name=" + name + ", sex=" + sex + ", age=" + age + ", department=" + department + "]";
    }

    public DBUserWritable(DBUserWritable d) {
        this.name = d.getName();
        this.sex = d.getSex();
        this.age = d.getAge();
        this.num = d.getNum();
        this.department = d.getDepartment();
        this.tables = d.getTables();
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSex() { return sex; }
    public void setSex(String sex) { this.sex = sex; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public int getNum() { return num; }
    public void setNum(int num) { this.num = num; }
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
    public String getTables() { return tables; }
    public void setTables(String tables) { this.tables = tables; }

    public DBUserWritable(String name, String sex, int age, int num, String department, String tables) {
        super();
        this.name = name;
        this.sex = sex;
        this.age = age;
        this.num = num;
        this.department = department;
        this.tables = tables;
    }

    public DBUserWritable() {
        super();
    }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(sex);
        out.writeInt(age);
        out.writeInt(num);
        out.writeUTF(department);
        out.writeUTF(tables);
    }

    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        sex = in.readUTF();
        age = in.readInt();
        num = in.readInt();
        department = in.readUTF();
        tables = in.readUTF();
    }

    public int compareTo(Object o) {
        return 0;
    }

    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, this.getName());
        statement.setString(2, this.getSex());
        statement.setInt(3, this.getAge());
        statement.setString(4, this.getDepartment());
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        this.name = resultSet.getString(1);
        this.sex = resultSet.getString(2);
        this.age = resultSet.getInt(3);
        this.department = resultSet.getString(4);
    }
}

## [Mapper]

package org.neworigin.com.Database;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UserDBMapper extends Mapper<LongWritable, Text, Text, DBUserWritable> {
    DBUserWritable DBuser = new DBUserWritable();

    @Override
    protected void map(LongWritable key, Text value,
            Mapper<LongWritable, Text, Text, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String[] values = value.toString().split(" ");
        if (values.length == 4) {
            DBuser.setName(values[0]);
            DBuser.setSex(values[1]);
            DBuser.setAge(Integer.parseInt(values[2]));
            DBuser.setNum(Integer.parseInt(values[3]));
            DBuser.setTables("t1");
            System.out.println("mapper---t1---------------" + DBuser);
            context.write(new Text(values[3]), DBuser);
        }
        if (values.length == 2) {
            DBuser.setNum(Integer.parseInt(values[0]));
            DBuser.setDepartment(values[1]);
            DBuser.setTables("t2");
            context.write(new Text(values[0]), DBuser);
            // System.out.println("mapper --t2" + "--" + values[0] + "----" + DBuser);
        }
    }
}

## [Reducer]

package org.neworigin.com.Database;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UserDBReducer extends Reducer<Text, DBUserWritable, NullWritable, DBUserWritable> {
    // public DBUserWritable db = new DBUserWritable();

    @Override
    protected void reduce(Text k2, Iterable<DBUserWritable> v2,
            Reducer<Text, DBUserWritable, NullWritable, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String Name = "";
        List<DBUserWritable> list = new LinkedList<DBUserWritable>();
        for (DBUserWritable val : v2) {
            list.add(new DBUserWritable(val)); // copy each value into a new object for the list
            // System.out.println("[table]" + val.getTables() + "----key" + k2 + "---" + val);
            if (val.getTables().equals("t2")) {
                Name = val.getDepartment();
            }
        }
        // the key is num
        for (DBUserWritable join : list) {
            System.out.println("[table]" + join.getTables() + "----key" + k2 + "---" + join);
            if (join.getTables().equals("t1")) {
                join.setDepartment(Name);
                System.out.println("db-----" + join);
                context.write(NullWritable.get(), join);
            }
        }
    }
}

## [Driver (app)]

package org.neworigin.com.Database;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class UserDBAPP {
    public static void main(String[] args) throws Exception, URISyntaxException {
        String INPUT_PATH = "file:///E:/BigData_eclipse_database/Database/data/table1";
        String INPUT_PATH1 = "file:///E:/BigData_eclipse_database/Database/data/table2";
        // String OUTPUT_PARH = "file:///E:/BigData_eclipse_database/Database/data/output";
        Configuration conf = new Configuration();
        // FileSystem fs = FileSystem.get(new URI(OUTPUT_PARH), conf);
        // if (fs.exists(new Path(OUTPUT_PARH))) {
        //     fs.delete(new Path(OUTPUT_PARH));
        // }
        Job job = new Job(conf, "mydb");
        // database configuration
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/hadoop", "root", "123456");
        FileInputFormat.addInputPaths(job, INPUT_PATH);
        FileInputFormat.addInputPaths(job, INPUT_PATH1);
        job.setMapperClass(UserDBMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DBUserWritable.class);
        job.setReducerClass(UserDBReducer.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(DBUserWritable.class);
        // FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PARH)); // set the output path
        DBOutputFormat.setOutput(job, "user_tables", "name", "sex", "age", "department");
        job.setOutputFormatClass(DBOutputFormat.class);
        boolean re = job.waitForCompletion(true);
        System.out.println(re);
    }
}

[Error] P.S. It's a table join; writing the result to local files works fine, but writing to the database fails:

17/11/10 11:39:11 WARN output.FileOutputCommitter: Output Path is null in cleanupJob()
17/11/10 11:39:11 WARN mapred.LocalJobRunner: job_local1812680657_0001
java.lang.Exception: java.io.IOException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:541)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 running in uber mode : false
17/11/10 11:39:12 INFO mapreduce.Job: map 100% reduce 0%
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 failed with state FAILED due to: NA
17/11/10 11:39:12 INFO mapreduce.Job: Counters: 35
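One concrete problem in the driver: DBConfiguration.configureDB(conf, ...) is called after new Job(conf, "mydb"), and Job copies the Configuration at construction time, so the JDBC driver and URL never reach the job's own configuration; DBOutputFormat.getRecordWriter then cannot open a connection and surfaces it as the bare IOException above. A sketch of the reordered driver fragment:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;

public class UserDBAPPFixSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // configure the JDBC connection BEFORE the Job snapshots the Configuration
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/hadoop", "root", "123456");
        Job job = new Job(conf, "mydb");
        // ... the rest of the original driver unchanged; equivalently, keep the
        // original order but call DBConfiguration.configureDB(job.getConfiguration(), ...)
    }
}
```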

Configured everything step by step; after several days of searching I still can't find where the problem is

[Fatal Error] :79:136: Character reference "&#24" is an invalid XML character.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; lineNumber: 79; columnNumber: 136; Character reference "&#24" is an invalid XML character.
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1168)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1040)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:382)
    at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:1662)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:215)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:93)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
    at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
    at org.apache.nutch.crawl.Crawl.main(Crawl.java:124)
Caused by: org.xml.sax.SAXParseException; lineNumber: 79; columnNumber: 136; Character reference "&#24" is an invalid XML character.
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:121)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1092)
    ... 12 more

The sqoop command line imports successfully, but calling Sqoop from Java imports no data: the table is created, yet it is empty. I don't know where to look for the cause anymore. Please help, thanks

1,使用sqoop將informix中的數據導入到hadoop中, 可以導入成功,在hive中可以查詢表的數據量信息。 2.使用java調用sqoop,使用的是相同的命令參數,Sqoop.runSqoop(sqoop, expandArguments) 返回的結果是0,在eclipse中,顯示的結果好像也是成功的,java中執行完成后,可以在hive中查到對應的表信息,但是卻沒有數據,現在不知道從哪裡找原因了,求助,感謝!(hadoop、hive,sqoop都是windows環境下) ECLIPSE執行的全部信息如下: 2020-06-29 11:17:39,900 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream 2020-06-29 11:17:39,907 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream expandArguments 成功! 2020-06-29T11:17:40,063 INFO [main] org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS 2020-06-29T11:17:40,067 WARN [main] org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration. 2020-06-29T11:17:40,092 INFO [main] org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.6 2020-06-29T11:17:40,139 INFO [main] org.apache.sqoop.tool.BaseSqoopTool - Using Hive-specific delimiters for output. You can override 2020-06-29T11:17:40,139 INFO [main] org.apache.sqoop.tool.BaseSqoopTool - delimiters with --fields-terminated-by, etc. 2020-06-29T11:17:40,150 WARN [main] org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration. 2020-06-29T11:17:40,181 WARN [main] org.apache.sqoop.ConnFactory - Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time. 2020-06-29T11:17:40,189 INFO [main] org.apache.sqoop.manager.SqlManager - Using default fetchSize of 1000 2020-06-29T11:17:40,193 INFO [main] org.apache.sqoop.tool.CodeGenTool - Beginning code generation 2020-06-29T11:17:40,566 INFO [main] org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM pmc_file AS t WHERE 1=0 2020-06-29T11:17:40,575 INFO [main] org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM pmc_file AS t WHERE 1=0 2020-06-29T11:17:40,603 INFO [main] org.apache.sqoop.orm.CompilationManager - $HADOOP_MAPRED_HOME is not set Note: \tmp\sqoop-機器用戶名稱\compile\8dabd1b206bb53c6f69beab4e93619b6\pmc_file.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. 2020-06-29T11:17:42,545 INFO [main] org.apache.sqoop.orm.CompilationManager - Writing jar file: \tmp\sqoop-機器用戶名稱\compile\8dabd1b206bb53c6f69beab4e93619b6\pmc_file.jar 2020-06-29T11:17:42,621 INFO [main] org.apache.sqoop.mapreduce.ImportJobBase - Beginning import of pmc_file 2020-06-29T11:17:42,807 INFO [main] org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar 2020-06-29T11:17:42,814 INFO [main] org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM pmc_file AS t WHERE 1=0 2020-06-29T11:17:43,509 INFO [main] org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address 2020-06-29T11:17:43,527 INFO [main] org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. 
Instead, use dfs.metrics.session-id 2020-06-29T11:17:43,528 INFO [main] org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId= 2020-06-29T11:17:45,103 INFO [main] org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation 2020-06-29T11:17:45,126 INFO [main] org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1 2020-06-29T11:17:45,135 INFO [main] org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS 2020-06-29T11:17:45,193 INFO [main] org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local528795584_0001 2020-06-29T11:17:46,093 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665311\hadoop-common-2.7.7.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/hadoop-common-2.7.7.jar 2020-06-29T11:17:46,294 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/hadoop-common-2.7.7.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665311/hadoop-common-2.7.7.jar 2020-06-29T11:17:46,294 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665312\mysql-connector-java-8.0.20.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/mysql-connector-java-8.0.20.jar 2020-06-29T11:17:46,344 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/mysql-connector-java-8.0.20.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665312/mysql-connector-java-8.0.20.jar 2020-06-29T11:17:46,344 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665313\sqoop-1.4.6.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/sqoop-1.4.6.jar 2020-06-29T11:17:46,396 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/sqoop-1.4.6.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665313/sqoop-1.4.6.jar 2020-06-29T11:17:46,396 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665314\hive-exec-2.3.7.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/hive-exec-2.3.7.jar 2020-06-29T11:17:46,447 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/hive-exec-2.3.7.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665314/hive-exec-2.3.7.jar 2020-06-29T11:17:46,447 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665315\ant-contrib-1.0b3.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/ant-contrib-1.0b3.jar 2020-06-29T11:17:46,503 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/ant-contrib-1.0b3.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665315/ant-contrib-1.0b3.jar 2020-06-29T11:17:46,503 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665316\libthrift-0.9.3.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/libthrift-0.9.3.jar 2020-06-29T11:17:46,555 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/libthrift-0.9.3.jar as 
file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665316/libthrift-0.9.3.jar 2020-06-29T11:17:46,556 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665317\mysql-connector-java-5.0.8-bin.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/mysql-connector-java-5.0.8-bin.jar 2020-06-29T11:17:46,606 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/mysql-connector-java-5.0.8-bin.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665317/mysql-connector-java-5.0.8-bin.jar 2020-06-29T11:17:46,606 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665318\ifxjdbc.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/ifxjdbc.jar 2020-06-29T11:17:46,660 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/ifxjdbc.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665318/ifxjdbc.jar 2020-06-29T11:17:46,660 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-機器用戶名稱\mapred\local\1593400665319\ant-eclipse-1.0-jvm1.2.jar <- D:\WorkFiles\code\JavaPractices\myhadoop/ant-eclipse-1.0-jvm1.2.jar 2020-06-29T11:17:46,714 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - Localized file:/D:/hadoop/job/sqoop-1.4.7/lib/ant-eclipse-1.0-jvm1.2.jar as file:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665319/ant-eclipse-1.0-jvm1.2.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665311/hadoop-common-2.7.7.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665312/mysql-connector-java-8.0.20.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665313/sqoop-1.4.6.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665314/hive-exec-2.3.7.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665315/ant-contrib-1.0b3.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665316/libthrift-0.9.3.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665317/mysql-connector-java-5.0.8-bin.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665318/ifxjdbc.jar 2020-06-29T11:17:46,766 INFO [main] org.apache.hadoop.mapred.LocalDistributedCacheManager - file:/D:/tmp/hadoop-機器用戶名稱/mapred/local/1593400665319/ant-eclipse-1.0-jvm1.2.jar 2020-06-29T11:17:46,772 INFO [main] org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/ 2020-06-29T11:17:46,773 INFO [main] org.apache.hadoop.mapreduce.Job - Running job: job_local528795584_0001 2020-06-29T11:17:46,775 INFO [Thread-18] org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null 2020-06-29T11:17:46,794 INFO [Thread-18] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 
1 2020-06-29T11:17:46,796 INFO [Thread-18] org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2020-06-29T11:17:46,939 INFO [Thread-18] org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks 2020-06-29T11:17:46,940 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local528795584_0001_m_000000_0 2020-06-29T11:17:46,964 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1 2020-06-29T11:17:46,971 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree - ProcfsBasedProcessTree currently is supported only on Linux. 2020-06-29T11:17:47,021 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1a1ab93f 2020-06-29T11:17:47,041 INFO [LocalJobRunner Map Task Executor #0] org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation 2020-06-29T11:17:47,047 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Processing split: 1=1 AND 1=1 2020-06-29T11:17:47,311 INFO [LocalJobRunner Map Task Executor #0] org.apache.sqoop.mapreduce.db.DBRecordReader - Working on split: 1=1 AND 1=1 2020-06-29T11:17:47,316 INFO [LocalJobRunner Map Task Executor #0] org.apache.sqoop.mapreduce.db.DBRecordReader - Executing query: SELECT pmc01, pmc02, pmc03, pmc04, pmc05, pmc06, pmc07, pmc081, pmc082, pmc091, pmc092, pmc093, pmc094, pmc095, pmc10, pmc11, pmc12, pmc13, pmc14, pmc15, pmc16, pmc17, pmc18, pmc19, pmc20, pmc21, pmc22, pmc23, pmc24, pmc25, pmc26, pmc27, pmc28, pmc30, pmc40, pmc41, pmc42, pmc43, pmc44, pmc45, pmc46, pmc47, pmc48, pmc49, pmc50, pmc51, pmc52, pmc53, pmc54, pmc55, pmc56, pmc901, pmc902, pmc903, pmc904, pmc905, pmc906, pmc907, pmc908, pmc909, pmc910, pmc911, pmc912, pmc913, pmc914, pmc915, pmc916, pmc917, pmc918, pmcacti, pmcuser, pmcgrup, pmcmodu, pmcdate FROM pmc_file AS pmc_file WHERE ( 1=1 ) AND ( 1=1 ) 2020-06-29T11:17:47,790 INFO [main] org.apache.hadoop.mapreduce.Job - Job job_local528795584_0001 running in uber mode : false 2020-06-29T11:17:47,791 INFO [main] org.apache.hadoop.mapreduce.Job - map 0% reduce 0% 2020-06-29T11:17:52,980 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:17:55,980 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:17:58,981 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:01,982 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:04,983 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:07,983 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:10,984 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:13,984 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:16,987 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:19,989 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:22,993 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:25,994 INFO 
[communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:28,994 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:31,994 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:34,994 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:37,995 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:40,995 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:41,050 INFO [Thread-67] org.apache.sqoop.mapreduce.AutoProgressMapper - Auto-progress thread is finished. keepGoing=false 2020-06-29T11:18:41,051 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:41,126 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task:attempt_local528795584_0001_m_000000_0 is done. And is in the process of committing 2020-06-29T11:18:41,130 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - map > map 2020-06-29T11:18:41,130 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task attempt_local528795584_0001_m_000000_0 is allowed to commit now 2020-06-29T11:18:41,191 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local528795584_0001_m_000000_0' to hdfs://localhost:9000/user/機器用戶名稱/abc/_temporary/0/task_local528795584_0001_m_000000 2020-06-29T11:18:41,192 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - map 2020-06-29T11:18:41,192 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task 'attempt_local528795584_0001_m_000000_0' done. 2020-06-29T11:18:41,195 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Final Counters for attempt_local528795584_0001_m_000000_0: Counters: 20 File System Counters FILE: Number of bytes read=42995830 FILE: Number of bytes written=43639176 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=0 HDFS: Number of bytes written=95659129 HDFS: Number of read operations=4 HDFS: Number of large read operations=0 HDFS: Number of write operations=3 Map-Reduce Framework Map input records=119212 Map output records=119212 Input split bytes=87 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=24 Total committed heap usage (bytes)=429916160 File Input Format Counters Bytes Read=0 File Output Format Counters Bytes Written=95659129 2020-06-29T11:18:41,195 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local528795584_0001_m_000000_0 2020-06-29T11:18:41,195 INFO [Thread-18] org.apache.hadoop.mapred.LocalJobRunner - map task executor complete. 
```
2020-06-29T11:18:41,825 INFO [main] org.apache.hadoop.mapreduce.Job - map 100% reduce 0%
2020-06-29T11:18:41,826 INFO [main] org.apache.hadoop.mapreduce.Job - Job job_local528795584_0001 completed successfully
2020-06-29T11:18:41,853 INFO [main] org.apache.hadoop.mapreduce.Job - Counters: 20
    File System Counters
        FILE: Number of bytes read=42995830
        FILE: Number of bytes written=43639176
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=0
        HDFS: Number of bytes written=95659129
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Map-Reduce Framework
        Map input records=119212
        Map output records=119212
        Input split bytes=87
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=24
        Total committed heap usage (bytes)=429916160
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=95659129
2020-06-29T11:18:41,854 INFO [main] org.apache.sqoop.mapreduce.ImportJobBase - Transferred 91.2277 MB in 58.3388 seconds (1.5638 MB/sec)
2020-06-29T11:18:41,857 INFO [main] org.apache.sqoop.mapreduce.ImportJobBase - Retrieved 119212 records.
2020-06-29T11:18:41,893 INFO [main] org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM pmc_file AS t WHERE 1=0
2020-06-29T11:18:41,907 INFO [main] org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM pmc_file AS t WHERE 1=0
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc40 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc41 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc42 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc43 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc44 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc45 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmc46 had to be cast to a less precise type in Hive
2020-06-29T11:18:41,913 WARN [main] org.apache.sqoop.hive.TableDefWriter - Column pmcdate had to be cast to a less precise type in Hive
2020-06-29T11:18:41,930 INFO [main] org.apache.sqoop.hive.HiveImport - Loading uploaded data into Hive
2020-06-29T11:18:42,041 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Found configuration file file:/D:/hadoop/job/apache-hive-2.3.7-bin/conf/hive-site.xml
2020-06-29 11:18:42,164 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
2020-06-29 11:18:43,159 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
2020-06-29 11:18:43,160 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
Logging initialized using configuration in jar:file:/D:/WorkFiles/code/JavaPractices/myhadoop/lib/hive-common-2.3.7.jar!/hive-log4j2.properties Async: true
2020-06-29T11:18:43,447 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created HDFS directory: /tmp/機器用戶名稱/fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:43,448 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created local directory: D:/hadoop/job/apache-hive-2.3.7-bin/my_hive/scratch_dir/fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:43,449 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created HDFS directory: /tmp/機器用戶名稱/fadb6ae0-af37-4aa2-be7f-2e8b43ecb727/_tmp_space.db
2020-06-29T11:18:43,455 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:43,459 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Updating thread name to fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main
2020-06-29T11:18:43,460 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2020-06-29T11:18:53,696 WARN [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.ql.session.SessionState - METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
OK
Time taken: 12.307 seconds
2020-06-29T11:18:55,797 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] CliDriver - Time taken: 12.307 seconds
2020-06-29T11:18:55,797 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:55,798 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.ql.session.SessionState - Resetting thread name to main
2020-06-29T11:18:55,798 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:55,798 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Updating thread name to fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main
Loading data to table default.my_pmc
2020-06-29T11:18:56,086 WARN [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
-chgrp: 'CFAG\Domain Users' does not match expected pattern for group
Usage: hadoop fs [generic options] -chgrp [-R] GROUP PATH...
2020-06-29T11:18:59,067 WARN [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
2020-06-29T11:18:59,303 WARN [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
OK
2020-06-29T11:18:59,502 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] CliDriver - Time taken: 3.704 seconds
2020-06-29T11:18:59,502 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:59,502 INFO [fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 main] org.apache.hadoop.hive.ql.session.SessionState - Resetting thread name to main
Time taken: 3.704 seconds
2020-06-29T11:18:59,503 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: fadb6ae0-af37-4aa2-be7f-2e8b43ecb727
2020-06-29T11:18:59,507 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Deleted directory: /tmp/機器用戶名稱/fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 on fs with scheme file
2020-06-29T11:18:59,509 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Deleted directory: D:/hadoop/job/apache-hive-2.3.7-bin/my_hive/scratch_dir/fadb6ae0-af37-4aa2-be7f-2e8b43ecb727 on fs with scheme file
2020-06-29T11:18:59,523 INFO [main] org.apache.sqoop.hive.HiveImport - Hive import complete.
2020-06-29T11:18:59,527 INFO [main] org.apache.sqoop.hive.HiveImport - Export directory is contains the _SUCCESS file only, removing the directory.
0
```
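For what it's worth: the import itself succeeds (the log ends with "Hive import complete." and exit code 0), and the only complaint in the transcript is the `-chgrp` line, which shows up when Hive, after loading the data, tries to propagate the warehouse directory's group onto the new files and the group is a Windows domain group ('CFAG\Domain Users') whose backslash and space `hadoop fs -chgrp` will not accept. A minimal sketch of one way to suppress that step, assuming Hive 2.3's permission inheritance is what triggers the chgrp (hive-site.xml; verify the property against your HiveConf before relying on it):

```
<!-- Sketch (assumption): hive.warehouse.subdir.inherit.perms defaults to true
     in Hive 2.x and makes Hive run chmod/chgrp on files it loads into the
     warehouse; turning it off avoids the failing chgrp on Windows. -->
<property>
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>false</value>
</property>
```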

MapReduce: looks like an override problem, please help

The error:

```
zxy@zxy-virtual-machine:/usr/hadoop/hadoop-2.4.0$ hadoop jar WordCount.jar WordCount /input /output
15/04/23 07:12:49 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/04/23 07:12:49 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/04/23 07:12:50 INFO input.FileInputFormat: Total input paths to process : 1
15/04/23 07:12:50 INFO mapreduce.JobSubmitter: number of splits:1
15/04/23 07:12:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local168934583_0001
15/04/23 07:12:51 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/staging/zxy168934583/.staging/job_local168934583_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
15/04/23 07:12:51 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/staging/zxy168934583/.staging/job_local168934583_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
15/04/23 07:12:52 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/local/localRunner/zxy/job_local168934583_0001/job_local168934583_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
15/04/23 07:12:52 WARN conf.Configuration: file:/home/zxy/hadoop_tmp/mapred/local/localRunner/zxy/job_local168934583_0001/job_local168934583_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
15/04/23 07:12:52 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/04/23 07:12:52 INFO mapreduce.Job: Running job: job_local168934583_0001
15/04/23 07:12:52 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/04/23 07:12:52 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/04/23 07:12:52 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/23 07:12:52 INFO mapred.LocalJobRunner: Starting task: attempt_local168934583_0001_m_000000_0
15/04/23 07:12:52 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/04/23 07:12:52 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/input/data.txt:0+57
15/04/23 07:12:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/04/23 07:12:53 INFO mapreduce.Job: Job job_local168934583_0001 running in uber mode : false
15/04/23 07:12:53 INFO mapreduce.Job: map 0% reduce 0%
15/04/23 07:12:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/04/23 07:12:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/04/23 07:12:55 INFO mapred.MapTask: soft limit at 83886080
15/04/23 07:12:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/04/23 07:12:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/04/23 07:12:55 INFO mapred.MapTask: Starting flush of map output
15/04/23 07:12:55 INFO mapred.MapTask: Spilling map output
15/04/23 07:12:55 INFO mapred.MapTask: bufstart = 0; bufend = 36; bufvoid = 104857600
15/04/23 07:12:55 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
15/04/23 07:12:55 INFO mapred.MapTask: Finished spill 0
15/04/23 07:12:55 INFO mapred.LocalJobRunner: map task executor complete.
15/04/23 07:12:55 WARN mapred.LocalJobRunner: job_local168934583_0001
java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 3
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
	at WordCount$TokenizerMapper.map(WordCount.java:35)
	at WordCount$TokenizerMapper.map(WordCount.java:1)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
15/04/23 07:12:55 INFO mapreduce.Job: Job job_local168934583_0001 failed with state FAILED due to: NA
15/04/23 07:12:55 INFO mapreduce.Job: Counters: 0
zxy@zxy-virtual-machine:/usr/hadoop/hadoop-2.4.0$ hadoop fs -ls /output
```
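The "attempt to override final parameter ... Ignoring" warnings are harmless noise; the job actually dies on the `java.lang.ArrayIndexOutOfBoundsException: 3` raised at WordCount.java line 35 inside TokenizerMapper.map(). In a mapper that almost always means the code splits each input line and then reads a fixed element (index 3 here) from a line that has fewer tokens. A minimal defensive sketch, assuming the original mapper indexes a fixed column; the delimiter and column number are illustrative, not taken from the poster's code:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical reconstruction: the original presumably read fields[3]
        // unconditionally, which throws when a line has 3 or fewer tokens.
        String[] fields = value.toString().split("\\s+");
        if (fields.length <= 3) {
            // Count and skip malformed records instead of failing the whole task.
            context.getCounter("WordCount", "MalformedRecords").increment(1);
            return;
        }
        word.set(fields[3]);
        context.write(word, ONE);
    }
}
```

Alternatively, comparing the expression at WordCount.java:35 against the shortest line in /input/data.txt usually pinpoints the offending record directly.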

Nutch: unable to create new native thread

While crawling with Nutch I get the error below. Here is the log; could someone explain what is going on?

```
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:597)
	at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
	at org.apache.nutch.parse.ParseUtil.runParser(ParseUtil.java:159)
	at org.apache.nutch.parse.ParseUtil.parse(ParseUtil.java:93)
	at org.apache.nutch.parse.ParseSegment.map(ParseSegment.java:97)
	at org.apache.nutch.parse.ParseSegment.map(ParseSegment.java:44)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
```

Running a MapReduce program from Eclipse: a space in the Windows 10 user name breaks tmp file creation and reads

The error is as follows:
```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.10.0/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/slf4j/slf4j-simple/1.6.6/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Java/Dev/Maven/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2019-09-05 10:27:02,488 WARN [main] impl.MetricsConfig (MetricsConfig.java:134) - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties
2019-09-05 10:27:04,715 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:147) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-09-05 10:27:04,743 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:480) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2019-09-05 10:27:10,228 WARN [pool-8-thread-1] impl.MetricsSystemImpl (MetricsSystemImpl.java:151) - JobTracker metrics system already initialized!
2019-09-05 10:27:10,326 WARN [Thread-6] mapred.LocalJobRunner$Job (LocalJobRunner.java:590) - job_local64686135_0001
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) ~[hadoop-mapreduce-client-common-3.1.2.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:559) [hadoop-mapreduce-client-common-3.1.2.jar:?]
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
	at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:377) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:347) ~[hadoop-mapreduce-client-common-3.1.2.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_221]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_221]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_221]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_221]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_221]
Caused by: java.io.FileNotFoundException: File D:/tmp/hadoop-William%20Scott/mapred/local/localRunner/icss/jobcache/job_local64686135_0001/attempt_local64686135_0001_m_000000_0/output/file.out.index does not exist
	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) ~[hadoop-common-3.1.2.jar:?]
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) ~[hadoop-common-3.1.2.jar:?]
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) ~[hadoop-common-3.1.2.jar:?]
	at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:211) ~[hadoop-common-3.1.2.jar:?]
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899) ~[hadoop-common-3.1.2.jar:?]
	at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:152) ~[hadoop-common-3.1.2.jar:?]
	at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:71) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:125) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:103) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
	at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:86) ~[hadoop-mapreduce-client-core-3.1.2.jar:?]
```
The situation: this Windows 10 machine signs in with a Microsoft account, which automatically puts a space between the first and last name, and switching accounts is not really convenient. The Hadoop runtime lives on the D: drive, though not in the root directory. Is there a way to make the local tmp files get generated somewhere else, or to rename the hadoop-William%20Scott folder? Thanks.
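Since the failing path D:/tmp/hadoop-William%20Scott/... is just ${hadoop.tmp.dir} (default /tmp/hadoop-${user.name}) with the space in the user name URL-encoded, one workaround is to point Hadoop's local scratch space at a space-free directory from the driver before submitting the job. A minimal sketch, assuming the job is launched from a local main(); the D:/tmp paths are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class Driver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Relocate the local scratch space away from the default
        // /tmp/hadoop-${user.name}, whose user name contains the space.
        conf.set("hadoop.tmp.dir", "D:/tmp/hadoop-local");
        // LocalJobRunner's map-output dir defaults to ${hadoop.tmp.dir}/mapred/local,
        // so overriding hadoop.tmp.dir alone is usually enough; this makes it explicit.
        conf.set("mapreduce.cluster.local.dir", "D:/tmp/hadoop-local/mapred/local");

        Job job = Job.getInstance(conf, "wordcount");
        // ... set jar, mapper, reducer, input/output paths as before ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The same two properties can also go into a core-site.xml/mapred-site.xml on the project classpath instead, which avoids touching the code.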

Nutch: customizing which content gets crawled

I'm new to Nutch. The environment is 64-bit CentOS 6.5 in a VirtualBox VM, with Nutch 2.3 checked out from the official site via svn. The initial requirement: starting from URLs I define myself, crawl down the pages whose content matches certain keywords (or HTML tags, or regular expressions); analyzing that content comes later. While learning I found that in Nutch 2.3 the bin/crawl command has replaced the older bin/nutch crawl, and the parameter list has changed almost completely. Here is what I tried. The 2.3 usage is:

```
bin/crawl
Usage: crawl <seedDir> <crawlID> [<solrUrl>] <numberOfRounds>
```

So I ran bin/crawl urls/ MyFirstCrawl http://localhost:8080/solr 6, where urls/ is the parent directory of my seed file (layout: urls/urls.txt, and urls.txt holds the page URLs to crawl), MyFirstCrawl is my own crawl name, and the solrUrl was filled in arbitrarily. Then I got the following error:

```
/home/release-2.3/runtime/local/bin/nutch generate -D mapred.reduce.tasks=2 -D mapred.child.java.opts=-Xmx1000m -D mapred.reduce.tasks.speculative.execution=false -D mapred.map.tasks.speculative.execution=false -D mapred.compress.map.output=true -topN 50000 -noNorm -noFilter -adddays 0 -crawlId MyFirstCrawl -batchId 1455677914-18990
GeneratorJob: starting at 2016-02-17 10:58:35
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: false
GeneratorJob: normalizing: false
GeneratorJob: topN: 50000
java.util.NoSuchElementException
	at java.util.TreeMap.key(TreeMap.java:1221)
	at java.util.TreeMap.firstKey(TreeMap.java:285)
	at org.apache.gora.memory.store.MemStore.execute(MemStore.java:125)
	at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:73)
	at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:68)
	at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:110)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:531)
	at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
GeneratorJob: finished at 2016-02-17 10:58:38, time elapsed: 00:00:02
GeneratorJob: generated batch id: 1455677914-18990 containing 0 URLs
Generate returned 1 (no new segments created)
Escaping loop: no more URLs to fetch now
```

What does this error mean, and how should I adjust things? Also, what configuration would it take to implement the crawling requirement I described at the start?
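One reading of this trace (an assumption, but it fits the org.apache.gora.memory.store.MemStore frames): Gora is running on its default in-memory store, so whatever the inject step wrote is gone by the time the generate step starts, GeneratorJob finds 0 URLs, and MemStore.execute trips over an empty TreeMap. If that is the cause, pointing storage.data.store.class at a persistent backend in runtime/local/conf/nutch-site.xml should change the behavior; a sketch with HBase as the example backend (MongoDB is another option, and the matching gora-* dependency must be enabled in ivy/ivy.xml before rebuilding with ant runtime):

```
<!-- nutch-site.xml sketch: replace the default in-memory MemStore with a
     persistent Gora store so inject/generate/fetch share the same data. -->
<property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.hbase.store.HBaseStore</value>
  <description>Persistent storage backend for Nutch 2.x crawl data.</description>
</property>
```

As for restricting what gets crawled: conf/regex-urlfilter.txt is the usual place for URL-level rules, with '+' lines accepting matching URLs and '-' lines rejecting them, first match wins; keyword- or tag-based filtering of page content, by contrast, needs a parse/index filter plugin rather than a URL filter.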

Nutch+MongoDB+ElasticSearch+Kibana setup: exception during the inject step

I'm building a Nutch + MongoDB + ElasticSearch + Kibana environment on Linux; Nutch was compiled from the apache-nutch-2.3.1-src.tar.gz sources, following http://blog.csdn.net/github_27609763/article/details/50597427. But it fails as soon as I run ./bin/nutch inject urls/. Any guidance would be much appreciated. The configuration is as follows.

nutch-site.xml:
```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>storage.data.store.class</name>
    <value>org.apache.gora.mongodb.store.MongoStore</value>
    <description>Default class for storing data</description>
  </property>
  <property>
    <name>http.agent.name</name>
    <value>Hist Crawler</value>
  </property>
  <property>
    <name>plugin.includes</name>
    <value>protocol-(http|httpclient)|urlfilter-regex|index-(basic|more)|query-(basic|site|url|lang)|indexer-elastic|nutch-extensionpoints|parse-(text|html|msexcel|msword|mspowerpoint|pdf)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)|parse-(html|tika|metatags)|index-(basic|anchor|more|metadata)</value>
  </property>
  <property>
    <name>elastic.host</name>
    <value>localhost</value>
  </property>
  <property>
    <name>elastic.cluster</name>
    <value>hist</value>
  </property>
  <property>
    <name>elastic.index</name>
    <value>nutch</value>
  </property>
  <property>
    <name>parser.character.encoding.default</name>
    <value>utf-8</value>
  </property>
  <property>
    <name>http.content.limit</name>
    <value>6553600</value>
  </property>
</configuration>
```

regex-urlfilter.txt is configured as follows:
```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The default url filter.
# Better for whole-internet crawling.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'.  The first matching pattern in the file
# determines whether a URL is included or ignored.  If no pattern
# matches, the URL is ignored.

# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/

# accept anything else
+^http://([a-z0-9]*\.)*nutch.apache.org/
# +.
```

And seed.txt under urls/ looks like this:
```
[root@jdu4e00u53f7 urls]# pwd
/chen/nutch/runtime/local/urls
[root@jdu4e00u53f7 urls]# cat seed.txt
http://blog.csdn.net/
[root@jdu4e00u53f7 urls]#
```

Finally, the error message:
```
2017-09-25 23:35:17,648 INFO crawl.InjectorJob - InjectorJob: starting at 2017-09-25 23:35:17
2017-09-25 23:35:17,649 INFO crawl.InjectorJob - InjectorJob: Injecting urlDir: urls
2017-09-25 23:35:18,058 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-09-25 23:35:19,115 INFO crawl.InjectorJob - InjectorJob: Using class org.apache.gora.mongodb.store.MongoStore as the Gora storage class.
2017-09-25 23:35:20,006 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/staging/root1639902035/.staging/job_local1639902035_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-09-25 23:35:20,009 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/staging/root1639902035/.staging/job_local1639902035_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-09-25 23:35:20,172 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1639902035_0001/job_local1639902035_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-09-25 23:35:20,175 WARN conf.Configuration - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1639902035_0001/job_local1639902035_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-09-25 23:35:20,504 WARN mapred.LocalJobRunner - job_local1639902035_0001
java.lang.Exception: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
	at org.apache.nutch.net.URLNormalizers.<init>(URLNormalizers.java:141)
	at org.apache.nutch.crawl.InjectorJob$UrlMapper.setup(InjectorJob.java:94)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2017-09-25 23:35:21,198 ERROR crawl.InjectorJob - InjectorJob: java.lang.RuntimeException: job failed: name=apache-nutch-2.3.1.jar, jobid=job_local1639902035_0001
	at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:120)
	at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:231)
	at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:252)
	at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:275)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:284)
```
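The failure "x point org.apache.nutch.net.URLNormalizer not found" means the plugin repository came up without any plugin implementing the URLNormalizer extension point, even though plugin.includes lists urlnormalizer-(pass|regex|basic). In practice this often comes down to Nutch not finding the plugin folder at runtime, for example when the command is launched outside runtime/local or after an incomplete ant runtime build. A sketch of the two nutch-site.xml settings worth double-checking (assumptions to verify, not a confirmed diagnosis):

```
<!-- Sketch: confirm the normalizer plugins are included and that Nutch can
     actually locate the plugin folder at runtime. -->
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|urlnormalizer-(pass|regex|basic)|parse-(html|tika)|index-(basic|more)|indexer-elastic|scoring-opic</value>
</property>
<property>
  <name>plugin.folders</name>
  <value>plugins</value>
  <description>Resolved relative to the runtime/local directory by bin/nutch;
  an absolute path such as /chen/nutch/runtime/local/plugins also works.</description>
</property>
```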
