Error message when submitting a MapReduce job

I've recently been submitting jobs this way: I package the MR job's business logic into a jar, upload it to HDFS, and submit the job from Eclipse (without installing the Hadoop plugin). It fails with Error: java.io.IOException: Unable to initialize any output collector. If I instead copy the jar onto the HDFS node and submit it there, the job completes successfully.
Why is that? My submission code:
```
Configuration conf = new Configuration();
conf.set("mapreduce.job.name", "count22");
// The job jar is referenced directly on HDFS rather than as a local file
conf.set("mapreduce.job.jar", "hdfs://172.16.200.210:8020/user/root/testjar/hadoopTest.jar");
conf.set("mapreduce.job.output.key.class", "org.apache.hadoop.io.Text");
conf.set("mapreduce.job.output.value.class", "org.apache.hadoop.io.IntWritable");
// Note: the mapred.* keys below are old-API property names, while the
// mapreduce.* keys above belong to the new API
conf.set("mapred.mapper.class", "hadoop.test$Map");
conf.set("mapred.reducer.class", "hadoop.test$Reduce");
conf.set("mapred.combiner.class", "hadoop.test$Reduce");
conf.set("mapred.input.format.class", "org.apache.hadoop.mapred.TextInputFormat");
conf.set("mapred.output.format.class", "org.apache.hadoop.mapred.TextOutputFormat");
conf.set("mapreduce.input.fileinputformat.inputdir",
        "hdfs://172.16.200.210:8020/user/root/input/count.txt");
conf.set("mapreduce.output.fileoutputformat.outputdir",
        "hdfs://172.16.200.210:8020/user/root/sc");
JobConf jobconf = new JobConf(conf);
JobClient.runJob(jobconf);
```
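For comparison, the conventional client-side flow keeps the job jar on the submitter's local disk and lets JobClient upload it to the staging area during submission; whether `mapreduce.job.jar` may point directly at an hdfs:// URI depends on the Hadoop version. A minimal sketch of the same submission through the typed JobConf API; the local jar path is a placeholder and the mapper/reducer wiring is elided:

```
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        JobConf job = new JobConf();
        job.setJobName("count22");
        // A jar on the LOCAL filesystem; the client copies it to the
        // job staging directory on HDFS as part of submission.
        job.setJar("C:/jobs/hadoopTest.jar"); // placeholder local path
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // job.setMapperClass(...); job.setReducerClass(...);  // real classes go here
        // Typed setters avoid mixing old (mapred.*) and new (mapreduce.*)
        // property names in one configuration.
        FileInputFormat.setInputPaths(job,
                new Path("hdfs://172.16.200.210:8020/user/root/input/count.txt"));
        FileOutputFormat.setOutputPath(job,
                new Path("hdfs://172.16.200.210:8020/user/root/sc"));
        JobClient.runJob(job);
    }
}
```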

Other related questions
MapReduce program runs without errors, but the map produces no output?

I'm using the code from https://blog.csdn.net/daihanglai7622/article/details/84760611. It runs correctly on my machine and produces output, but when run on the cluster the output is empty. Judging from the logs, the map stage is probably where it goes wrong. ![图片说明](https://img-ask.csdn.net/upload/201911/20/1574218622_84460.png) Since it runs locally, the logic should be fine, so where is the problem? Newbie asking for help; I have no C-coins, so I can't offer a bounty…
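One way to narrow this down is to count what the mapper actually sees and emits, then compare the counters shown on the job's web UI between the local and cluster runs. A minimal sketch, assuming a text-based mapper; the counter group and names are made up for illustration:

```
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Counters survive on the cluster, where System.out is hard to find.
        context.getCounter("debug", "map.input.records").increment(1);
        String line = value.toString().trim();
        if (line.isEmpty()) {
            context.getCounter("debug", "map.skipped.records").increment(1);
            return; // an unexpected input format on the cluster would show up here
        }
        context.write(new Text(line), ONE);
        context.getCounter("debug", "map.output.records").increment(1);
    }
}
```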

Error running a Hadoop MapReduce example in Eclipse

The bundled Hadoop examples run fine from the terminal and the Hadoop nodes are healthy. The error is as follows:

```
17/09/05 20:20:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/05 20:20:16 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
17/09/05 20:20:16 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
Exception in thread "main" java.net.ConnectException: Call From master/192.168.1.110 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at mapreduce.Temperature.main(Temperature.java:202)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 28 more
```
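The stack shows the client calling localhost:9000 even though the cluster runs at master/192.168.1.110, which suggests the Eclipse run is not picking up the cluster's core-site.xml and falls back to a default fs address. A quick probe of what the program actually resolves, as a sketch:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Prints file:/// or hdfs://localhost:9000 etc., revealing which
        // configuration files the classpath actually supplies.
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        System.out.println("resolved fs  = " + FileSystem.get(conf).getUri());
    }
}
```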

Importing MapReduce results into MySQL fails; writing to the local filesystem works fine. Could someone passing by take a look?

![error message](https://img-ask.csdn.net/upload/201909/29/1569747157_413167.png) The Driver code is as follows:

```
package com.sky.cmcc.offlineComputeMR;

import com.sky.cmcc.pojo.MFee;
import com.sky.cmcc.pojo.RFee;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;

/**
 * Classname   CmccDriver
 * Date        2019/9/29 11:08
 * Created by  Teddys
 * Description Loads the configuration, starts the MR job, and writes the results to MySQL
 */
public class CmccDriver {
    // The four MySQL connection settings
    private static String DriverClass = "com.mysql.jdbc.Driver";
    private static String url = "jdbc:mysql://localhost:3306/bot?characterEncoding=UTF-8";
    private static String username = "root";
    private static String password = "123456";

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Load the configuration and create the Job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // Connect to MySQL.
        // NOTE: this runs AFTER Job.getInstance has already copied conf, so the
        // JDBC settings may never reach the job; configureDB is usually called
        // before creating the Job (or on job.getConfiguration()).
        DBConfiguration.configureDB(conf, DriverClass, url, username, password);
        // 2. Set the job jar
        job.setJarByClass(CmccDriver.class);
        // 3. Mapper and reducer classes
        job.setMapperClass(CmccMapper.class);
        job.setReducerClass(CmccReducer.class);
        // 4. Map output and final output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(MFee.class);
        job.setOutputKeyClass(RFee.class);
        job.setOutputValueClass(NullWritable.class);
        // 5. Input path; note the output goes to MySQL
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        DBOutputFormat.setOutput(job, "cmcc0513",
                "day", "chargefee", "shouldfee", "orderCount", "chargePayTime", "chargeSuccessCount");
        // 6. Submit the job and run
        boolean b = job.waitForCompletion(true);
        System.out.println("b====" + b); // NOTE: originally placed after System.exit, where it was unreachable
        System.exit(b ? 0 : 1);
    }
}
```

Error running Hadoop 2.6 MapReduce from local Eclipse, please help

The error message is:

```
2016-02-26 11:24:07,722 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-02-26 11:24:07,727 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-02-26 11:24:08,081 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-02-26 11:24:08,091 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(252)) - Cleaning up the staging area file:/tmp/hadoop-fire/mapred/staging/fire1322517587/.staging/job_local1322517587_0001
2016-02-26 11:24:08,095 WARN [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1674)) - PriviledgedActionException as:fire (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/fire/dedup_in
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/fire/dedup_in
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:304)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:321)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at com.hebut.mr.Dedup.main(Dedup.java:135)
```
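The path `file:/user/fire/dedup_in` shows the job resolved the input against the local filesystem (note also the `No job jar file set` warning and the `job_local...` staging directory: this is the LocalJobRunner). A minimal sketch of making the target filesystem explicit; the NameNode address is a placeholder, not taken from the question:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class InputPathSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Without this, a bare path like "dedup_in" resolves against file:///.
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder address

        Job job = Job.getInstance(conf, "dedup");
        // Or spell the full URI out on the path itself:
        FileInputFormat.addInputPath(job,
                new Path("hdfs://namenode-host:8020/user/fire/dedup_in"));
    }
}
```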

Error running MapReduce with the Hadoop Eclipse plugin installed

I set up a fully distributed HDFS cluster in virtual machines and installed the Eclipse plugin on Windows 7. Running a MapReduce program fails, apparently because it cannot connect. Please help! ![image](https://img-ask.csdn.net/upload/201608/08/1470613451_63912.jpg)![image](https://img-ask.csdn.net/upload/201608/08/1470613343_436152.jpg)

HBase MapReduce error: java.lang.NullPointerException

The error is similar to the one in http://bbs.csdn.net/topics/390865764; asking the experts for guidance.

```
2017-09-15 23:19:15 [WARN]-[] Your hostname, admin-PC resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:c0a8:164%23, but we couldn't find any external IP address!
2017-09-15 23:19:15 [INFO]-[org.apache.hadoop.conf.Configuration.deprecation] session.id is deprecated. Instead, use dfs.metrics.session-id
2017-09-15 23:19:15 [INFO]-[org.apache.hadoop.metrics.jvm.JvmMetrics] Initializing JVM Metrics with processName=JobTracker, sessionId=
Exception in thread "main" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:487)
    at org.apache.hadoop.util.Shell.run(Shell.java:460)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:720)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:813)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:796)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:656)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:444)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:308)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:147)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at TestOnlyMapper.main(TestOnlyMapper.java:35)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
```

The code:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

import java.io.IOException;

/**
 * Created by admin on 2017/9/15.
 */
public class TestOnlyMapper {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://hadoop.master:8020/hdfs/hbase");
        conf.set("hbase.zookeeper.quorum", "hadoop.master,hadoop.slave11,hadoop.slave12");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        Job job = new Job(conf, "test");
        job.setJarByClass(TestOnlyMapper.class);
        Scan scan = new Scan();
        job.setMapSpeculativeExecution(false);
        job.setReduceSpeculativeExecution(false);
        TableMapReduceUtil.initTableMapperJob("test11", scan, OMapper.class, null, null, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.waitForCompletion(true);
    }
}

class OMapper extends TableMapper<Text, LongWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {
        for (Cell cell : value.listCells()) {
            System.out.println("---------------------");
            // NOTE: originally cell.getQualifier().toString(), which prints the
            // byte[] reference; decode the qualifier bytes instead.
            System.out.println("qualifier = " + Bytes.toString(CellUtil.cloneQualifier(cell)));
            System.out.println("---------------------");
        }
    }
}
```

Submitting a MapReduce job from Eclipse on Win7 fails with exit code 143

```
2017-05-24 08:20:25,861 FATAL [IPC Server handler 4 on 34806] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1495633327030_0011_m_000002_2 - exited : java.io.IOException: Unable to initialize any output collector
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:439)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-05-24 08:20:26,651 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1495633327030_0011_m_000000_2: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
```
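`Unable to initialize any output collector` from MapTask.createSortingCollector hides the real cause in a nested exception in the full task log; one commonly reported trigger is a map sort buffer larger than the task heap allows. A sketch of pinning both to conservative values; the numbers are illustrative, not taken from the question:

```
import org.apache.hadoop.conf.Configuration;

public class CollectorConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The sort buffer must fit comfortably inside the map task heap.
        conf.set("mapreduce.task.io.sort.mb", "100");    // default-sized buffer
        conf.set("mapreduce.map.java.opts", "-Xmx512m"); // illustrative heap
        // ... build and submit the Job from this conf ...
    }
}
```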

MapReduce job errors when run on Hadoop

I used MapReduce on Hadoop to analyze a novel of less than 4 MB. The job took nearly an hour and a half to finish, produced only a _temporary file, and then reported an error. The Java code itself is fine.

Error when a QuartzJob invokes a Hadoop MapReduce job

```
Error: java.io.IOException: com.mysql.jdbc.Driver
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
```
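`java.io.IOException: com.mysql.jdbc.Driver` from DBOutputFormat.getRecordWriter is how DBOutputFormat surfaces a ClassNotFoundException: the reduce task could not load the JDBC driver, typically because the connector jar is on the client's classpath but was never shipped to the cluster. A minimal sketch of distributing the jar with the job, assuming the connector has first been copied to HDFS; the path is a placeholder:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class ShipJdbcDriverSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "db-export");
        // Adds the jar to every task's classpath via the distributed cache.
        job.addFileToClassPath(new Path("/libs/mysql-connector-java-5.1.47.jar")); // placeholder HDFS path
        // ... mapper/reducer/DBOutputFormat setup as usual ...
    }
}
```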

Hadoop 2.5.2 MapReduce job fails

```
16/06/14 03:26:45 INFO client.RMProxy: Connecting to ResourceManager at centos1/192.168.6.132:8032
16/06/14 03:26:47 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/06/14 03:26:47 INFO input.FileInputFormat: Total input paths to process : 1
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: number of splits:1
16/06/14 03:26:48 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
16/06/14 03:26:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1465885546873_0002
16/06/14 03:26:49 INFO impl.YarnClientImpl: Submitted application application_1465885546873_0002
16/06/14 03:26:49 INFO mapreduce.Job: The url to track the job: http://centos1:8088/proxy/application_1465885546873_0002/
16/06/14 03:26:49 INFO mapreduce.Job: Running job: job_1465885546873_0002
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 running in uber mode : false
16/06/14 03:27:10 INFO mapreduce.Job: map 0% reduce 0%
16/06/14 03:27:10 INFO mapreduce.Job: Job job_1465885546873_0002 failed with state FAILED due to: Application application_1465885546873_0002 failed 2 times due to Error launching appattempt_1465885546873_0002_000002. Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:50334 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:118)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```

The error log is as follows:

```
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1465885546873_0002_01_000001, NodeId: local:42709, NodeHttpAddress: local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 127.0.0.1:42709 }, ] for AM appattempt_1465885546873_0002_000001
2016-06-14 03:26:49,936 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1465885546873_0002_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2016-06-14 03:26:50,948 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:51,950 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:52,951 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:53,952 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:54,953 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:55,954 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:56,956 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:57,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:58,959 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:59,960 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: local/127.0.0.1:42709. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-14 03:26:59,962 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error launching appattempt_1465885546873_0002_000001. Got exception: java.net.ConnectException: Call From local.localdomain/127.0.0.1 to local:42709 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
```

core-site.xml:

```
<configuration>
  <property><name>ha.zookeeper.quorum</name><value>centos1:2181,centos2:2181,centos3:2181</value></property>
  <property><name>hadoop.tmp.dir</name><value>/opt/hadoop2.5</value></property>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
</configuration>
```

hdfs-site.xml:

```
<configuration>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>centos1,centos2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.centos1</name><value>centos1:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.centos2</name><value>centos2:8020</value></property>
  <property><name>dfs.namenode.http-address.mycluster.centos1</name><value>centos1:50070</value></property>
  <property><name>dfs.namenode.http-address.mycluster.centos2</name><value>centos2:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://centos2:8485;centos3:8485;centos4:8485/mycluster</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_dsa</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/home/hadoop-data</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
</configuration>
```

yarn-site.xml:

```
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.resourcemanager.hostname</name><value>centos1</value></property>
  <property><name>yarn.resourcemanager.address</name><value>centos1:8032</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>centos1:8033</value></property>
</configuration>
```

mapred-site.xml:

```
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>
```

slaves:

```
centos2
centos3
centos4
```

hosts:

```
127.0.0.1 local local.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```
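From the log, the ResourceManager tries to start the ApplicationMaster on a NodeManager that registered itself as `local`, which the hosts file resolves to 127.0.0.1, so the RM ends up dialing its own loopback and is refused. A likely direction, assuming the slave nodes share this hosts file: keep the `local` alias off the loopback line so every daemon registers under its routable hostname, e.g.:

```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.132 centos1
192.168.6.133 centos2
192.168.6.134 centos3
192.168.6.135 centos4
```

After changing /etc/hosts on every node, the NodeManagers need a restart so they re-register with the new hostname.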

MapReduce run from Eclipse executes locally instead of on the Hadoop cluster

I use Eclipse to connect remotely to a Hadoop cluster on Linux. MapReduce programs complete successfully and the results are visible on the cluster. However, while a program is running, jps on the cluster shows no corresponding process, and the job web UI has no record of my MapReduce task either. Is Eclipse actually running the job locally rather than on the cluster? I can't figure it out; please advise.
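These symptoms match local execution: by default a Configuration built inside the IDE has mapreduce.framework.name=local, so the job runs in the LocalJobRunner while still reading and writing the cluster's HDFS. A minimal sketch of the settings that push execution to YARN; the hostnames and jar path are placeholders:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SubmitToYarnSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");  // placeholder
        conf.set("mapreduce.framework.name", "yarn");           // the default is "local"
        conf.set("yarn.resourcemanager.hostname", "rm-host");   // placeholder
        // When submitting from Windows to a Linux cluster:
        conf.set("mapreduce.app-submission.cross-platform", "true");

        Job job = Job.getInstance(conf, "remote-submit");
        // The tasks need the job classes as a jar; setJarByClass alone
        // does not help when running from an IDE's class folder.
        job.setJar("C:/jobs/myjob.jar");                        // placeholder local jar
        // ... mapper/reducer/io setup ...
    }
}
```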

Warnings when running wordcount on a Hadoop cluster

I'm a Hadoop beginner; please help me out, many thanks! I built a Hadoop cluster with Docker inside a VM, using ubuntu18.04 images. My hadoop1 master node runs the following services:
```
root@hadoop1:/usr/local/hadoop# jps
2058 NameNode
2266 SecondaryNameNode
2445 ResourceManager
2718 Jps
```
The services on the two worker nodes:
```
root@hadoop2:~# jps
294 DataNode
550 Jps
406 NodeManager
```
```
root@hadoop3:~# jps
543 Jps
399 NodeManager
287 DataNode
```
On hadoop1 (the master) I create a /data/input directory in HDFS:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -mkdir -p /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
These warnings appear on every single bin/hdfs dfs invocation. Do they affect the cluster in any way, and how can I eliminate them?
The same warnings appear when pushing the test1 file to HDFS:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -put test1 /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
And when listing the uploaded file:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -ls /data/input
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Found 1 items
-rw-r--r-- 1 root supergroup 60 2019-09-15 08:07 /data/input/test1
```
And when running share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:
```
root@hadoop1:/usr/local/hadoop# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /data/input/test1 /data/output/test1
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
And when viewing the wordcount result afterwards:
```
root@hadoop1:/usr/local/hadoop# bin/hdfs dfs -cat /data/output/test1/part-r-00000
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.9.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
first 1
hello 2
is 2
my 2
test1 1
testwordcount 1
this 2
```
Could someone help me figure out how to resolve this? Many thanks!

[Newbie] Map produces no output during Hadoop MapReduce execution

I followed this blog post: "hadoop2.6 distributed - simple example study - find the yearly maximum temperature and sort temperatures by year" (http://blog.csdn.net/lablenet/article/details/50608197#java). The run output is below.

```
16/10/19 05:27:51 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
16/10/19 05:27:52 INFO input.FileInputFormat: Total input paths to process : 1
16/10/19 05:27:52 INFO util.NativeCodeLoader: Loaded the native-hadoop library
16/10/19 05:27:52 WARN snappy.LoadSnappy: Snappy native library not loaded
16/10/19 05:27:54 INFO mapred.JobClient: Running job: job_201610190234_0013
16/10/19 05:27:55 INFO mapred.JobClient: map 0% reduce 0%
16/10/19 05:28:24 INFO mapred.JobClient: map 100% reduce 0%
16/10/19 05:28:41 INFO mapred.JobClient: map 100% reduce 20%
16/10/19 05:28:42 INFO mapred.JobClient: map 100% reduce 40%
16/10/19 05:28:50 INFO mapred.JobClient: map 100% reduce 46%
16/10/19 05:28:51 INFO mapred.JobClient: map 100% reduce 60%
16/10/19 05:29:01 INFO mapred.JobClient: map 100% reduce 100%
16/10/19 05:29:01 INFO mapred.JobClient: Job complete: job_201610190234_0013
16/10/19 05:29:01 INFO mapred.JobClient: Counters: 28
16/10/19 05:29:01 INFO mapred.JobClient:   Job Counters
16/10/19 05:29:01 INFO mapred.JobClient:     Launched reduce tasks=6
16/10/19 05:29:01 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=26528
16/10/19 05:29:01 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
16/10/19 05:29:01 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
16/10/19 05:29:01 INFO mapred.JobClient:     Launched map tasks=1
16/10/19 05:29:01 INFO mapred.JobClient:     Data-local map tasks=1
16/10/19 05:29:01 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=107381
16/10/19 05:29:01 INFO mapred.JobClient:   File Output Format Counters
16/10/19 05:29:01 INFO mapred.JobClient:     Bytes Written=0
16/10/19 05:29:01 INFO mapred.JobClient:   FileSystemCounters
16/10/19 05:29:01 INFO mapred.JobClient:     FILE_BYTES_READ=30
16/10/19 05:29:01 INFO mapred.JobClient:     HDFS_BYTES_READ=1393
16/10/19 05:29:01 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=354256
16/10/19 05:29:01 INFO mapred.JobClient:   File Input Format Counters
16/10/19 05:29:01 INFO mapred.JobClient:     Bytes Read=1283
16/10/19 05:29:01 INFO mapred.JobClient:   Map-Reduce Framework
16/10/19 05:29:01 INFO mapred.JobClient:     Map output materialized bytes=30
16/10/19 05:29:01 INFO mapred.JobClient:     Map input records=46
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce shuffle bytes=30
16/10/19 05:29:01 INFO mapred.JobClient:     Spilled Records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Map output bytes=0
16/10/19 05:29:01 INFO mapred.JobClient:     CPU time spent (ms)=16910
16/10/19 05:29:01 INFO mapred.JobClient:     Total committed heap usage (bytes)=195301376
16/10/19 05:29:01 INFO mapred.JobClient:     Combine input records=0
16/10/19 05:29:01 INFO mapred.JobClient:     SPLIT_RAW_BYTES=110
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce input records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce input groups=0
16/10/19 05:29:01 INFO mapred.JobClient:     Combine output records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Physical memory (bytes) snapshot=331567104
16/10/19 05:29:01 INFO mapred.JobClient:     Reduce output records=0
16/10/19 05:29:01 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2264113152
16/10/19 05:29:01 INFO mapred.JobClient:     Map output records=0
```

The data source format is `yyyy-MM-dd HH:mm:ss\ttemperature`, for example `1995-10-10 10:10:10 6.54`. In RunJob I changed the `°C` in

```
int year = c.get(1);
String hot = ss[1].substring(0, ss[1].lastIndexOf("°C"));
KeyPari keyPari = new KeyPari();
keyPari.setYear(year);
```

to `\n`. The code is otherwise the same as the blog post's, except that I removed the IF check inside the map and changed the input/output paths. Could anyone explain why this happens? Much appreciated.

Class not found when running a MapReduce program from Eclipse

Using setJarByClass results in a class-not-found error. How should this be resolved? ![图片说明](https://img-ask.csdn.net/upload/201912/04/1575464809_363370.png)

The reduce function never executes in MapReduce

I wrote some code to get familiar with MapReduce, but the reduce function never executes and the program reports no errors. I've browsed many forums without finding a solution. Since I'm new to MapReduce programming, could someone check whether the code has problems? The code is as follows:
Mapper:
```
package Utils;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BayMapper extends Mapper<Object, Text, Cell, Text> {
    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        Cell[][] cells = new Cell[ClusterConfig.cellRow][ClusterConfig.cellColumn];
        int cellx = 0;
        int celly = 0;
        for (int i = 0; i < ClusterConfig.cellRow; i++)
            for (int j = 0; j < ClusterConfig.cellColumn; j++) {
                cells[i][j] = new Cell();
            }
        while (itr.hasMoreTokens()) {
            String outValue = new String(itr.nextToken());
            System.out.println(outValue);
            String[] list = outValue.split(","); // list.length == 2
            for (int i = 0; i < list.length; i++) {
                double x;
                double y;
                x = Double.valueOf(list[0]);
                y = Double.valueOf(list[1]);
                cellx = (int) Math.ceil((x - ClusterConfig.xmin) / ClusterConfig.intervalx);
                celly = (int) Math.ceil((y - ClusterConfig.ymin) / ClusterConfig.intervaly);
                // cells[cellx][celly].addnumberPoints(); // count of points in this cell
            }
            context.write(cells[cellx][celly], new Text(outValue));
        }
    }
}
```
Reducer:
```
package Utils;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class BayReducer extends Reducer<Cell, Text, Cell, IntWritable> {
    @Override
    protected void reduce(Cell key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int count = 0;
        Iterator<Text> iterator = values.iterator();
        while (iterator.hasNext()) {
            iterator.next(); // NOTE: the original loop never called next(), so it spun forever
            count++;
        }
        if (count >= 20) {
            context.write(key, new IntWritable(count));
        }
    }
}
```
Driver:
```
package Cluster;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import Utils.BayMapper;
import Utils.BayReducer;
import Utils.Cell;

public class ClusterDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "localhost:9000");
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: Data Cluster <in> <out>");
            System.exit(2);
        }
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "Baymax Cluster");
        job.setJarByClass(ClusterDriver.class);
        job.setMapperClass(BayMapper.class);
        job.setReducerClass(BayReducer.class);
        // NOTE: the map output value type (Text) differs from the final output
        // value type (IntWritable), so it must be declared explicitly; Cell must
        // also implement WritableComparable consistently for keys to group.
        job.setMapOutputKeyClass(Cell.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Cell.class);
        job.setOutputValueClass(IntWritable.class);
        Path in = new Path(otherArgs[0]);
        Path out = new Path(otherArgs[1]);
        FileInputFormat.addInputPath(job, in);    // set input path
        FileOutputFormat.setOutputPath(job, out); // set output path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Hive cannot run queries through MapReduce?

Hadoop version: CDH 5.1.0; Hive version: hive-0.12-cdh5.1.0. The Hadoop and YARN web pages start normally and page monitoring looks fine. I'm testing Hive with the default Derby database; hive-env.sh has the Hadoop path configured, and hive-site.xml is the default with no modifications. Then a simple test:

```
hive> select count(*) from hive_sum;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:472)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:450)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:402)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:921)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
```
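The exception points directly at `mapreduce.framework.name`: the Hadoop configuration visible to the Hive client apparently lacks the MR2/YARN settings, so JobClient cannot initialize a Cluster object. As a quick probe, the property can be set for the session from the Hive CLI; `yarn` is an assumption based on the CDH 5.1 stack described above:

```
hive> set mapreduce.framework.name=yarn;
hive> select count(*) from hive_sum;
```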

Running MapReduce throws Method threw 'java.lang.IllegalStateException' exception. Cannot evaluate org.apache.hadoop.mapreduce.Job.toString()

After running the code below, the exception above appears once the Job is created, yet execution continues to the end; the job is never actually submitted and nothing shows up in the history server. Please help, newbie here o(╥﹏╥)o.

```
package MapReducer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;

/**
 * @Describe First MapReduce program: read a document and count words
 * @Author zhanglei
 * @Date 2019/11/18 22:53
 **/
public class WordCountApp extends Configured implements Tool {

    public int run(String[] strings) throws Exception {
        String input_path = "hdfs://192.168.91.130:8020/data/wc.txt";
        String output_path = "hdfs://192.168.91.130:8020/data/outputwc";
        Configuration configuration = getConf();
        final FileSystem fileSystem = FileSystem.get(new URI(input_path), configuration);
        if (fileSystem.exists(new Path(output_path))) {
            fileSystem.delete(new Path(output_path), true);
        }
        // The "Cannot evaluate Job.toString()" message typically comes from the
        // debugger's variables view: Job.toString() throws IllegalStateException
        // until the job has been submitted, so that message alone is harmless.
        Job job = Job.getInstance(configuration, "WordCountApp");
        job.setJarByClass(WordCountApp.class);
        job.setMapperClass(WordCountMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setReducerClass(WordCountReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        Path inpath = new Path(input_path);
        FileInputFormat.addInputPath(job, inpath);
        job.setOutputFormatClass(TextOutputFormat.class);
        Path outpath = new Path(output_path);
        FileOutputFormat.setOutputPath(job, outpath);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // NOTE: this was originally declared as Reducer<Object, Text, Text, IntWritable>,
    // which does not match the map output types (Text, IntWritable); with that
    // mismatch the reduce method below never overrides Reducer.reduce.
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final static IntWritable res = new IntWritable(1);

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            res.set(sum);
            context.write(key, res);
        }
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new WordCountApp(), args);
        System.exit(exitCode);
    }
}
```

MapReduce write to a database fails

## DBUserWritable class

```
package org.neworigin.com.Database;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBUserWritable implements DBWritable, WritableComparable {
    private String name = "";
    private String sex = "";
    private int age = 0;
    private int num = 0;
    private String department = "";
    private String tables = "";

    @Override
    public String toString() {
        return "DBUserWritable [name=" + name + ", sex=" + sex + ", age=" + age
                + ", department=" + department + "]";
    }

    public DBUserWritable(DBUserWritable d) {
        this.name = d.getName();
        this.sex = d.getSex();
        this.age = d.getAge();
        this.num = d.getNum();
        this.department = d.getDepartment();
        this.tables = d.getTables();
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSex() { return sex; }
    public void setSex(String sex) { this.sex = sex; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public int getNum() { return num; }
    public void setNum(int num) { this.num = num; }
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
    public String getTables() { return tables; }
    public void setTables(String tables) { this.tables = tables; }

    public DBUserWritable(String name, String sex, int age, int num, String department, String tables) {
        super();
        this.name = name;
        this.sex = sex;
        this.age = age;
        this.num = num;
        this.department = department;
        this.tables = tables;
    }

    public DBUserWritable() {
        super();
    }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(sex);
        out.writeInt(age);
        out.writeInt(num);
        out.writeUTF(department);
        out.writeUTF(tables);
    }

    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        sex = in.readUTF();
        age = in.readInt();
        num = in.readInt();
        department = in.readUTF();
        tables = in.readUTF();
    }

    public int compareTo(Object o) {
        return 0;
    }

    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, this.getName());
        statement.setString(2, this.getSex());
        statement.setInt(3, this.getAge());
        statement.setString(4, this.getDepartment());
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        this.name = resultSet.getString(1);
        this.sex = resultSet.getString(2);
        this.age = resultSet.getInt(3);
        this.department = resultSet.getString(4);
    }
}
```

## Mapper

```
package org.neworigin.com.Database;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UserDBMapper extends Mapper<LongWritable, Text, Text, DBUserWritable> {
    DBUserWritable DBuser = new DBUserWritable();

    @Override
    protected void map(LongWritable key, Text value,
                       Mapper<LongWritable, Text, Text, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String[] values = value.toString().split(" ");
        if (values.length == 4) {
            DBuser.setName(values[0]);
            DBuser.setSex(values[1]);
            DBuser.setAge(Integer.parseInt(values[2]));
            DBuser.setNum(Integer.parseInt(values[3]));
            DBuser.setTables("t1");
            System.out.println("mapper---t1---------------" + DBuser);
            context.write(new Text(values[3]), DBuser);
        }
        if (values.length == 2) {
            DBuser.setNum(Integer.parseInt(values[0]));
            DBuser.setDepartment(values[1]);
            DBuser.setTables("t2");
            context.write(new Text(values[0]), DBuser);
            // System.out.println("mapper --t2" + "--" + values[0] + "----" + DBuser);
        }
    }
}
```

## Reducer

```
package org.neworigin.com.Database;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UserDBReducer extends Reducer<Text, DBUserWritable, NullWritable, DBUserWritable> {

    @Override
    protected void reduce(Text k2, Iterable<DBUserWritable> v2,
                          Reducer<Text, DBUserWritable, NullWritable, DBUserWritable>.Context context)
            throws IOException, InterruptedException {
        String Name = "";
        List<DBUserWritable> list = new LinkedList<DBUserWritable>();
        for (DBUserWritable val : v2) {
            list.add(new DBUserWritable(val)); // add a fresh copy to the list
            if (val.getTables().equals("t2")) {
                Name = val.getDepartment();
            }
        }
        // the join key is num
        for (DBUserWritable join : list) {
            System.out.println("[table]" + join.getTables() + "----key" + k2 + "---" + join);
            if (join.getTables().equals("t1")) {
                join.setDepartment(Name);
                System.out.println("db-----" + join);
                context.write(NullWritable.get(), join);
            }
        }
    }
}
```

## App

```
package org.neworigin.com.Database;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class UserDBAPP {
    public static void main(String[] args) throws Exception {
        String INPUT_PATH = "file:///E:/BigData_eclipse_database/Database/data/table1";
        String INPUT_PATH1 = "file:///E:/BigData_eclipse_database/Database/data/table2";
        Configuration conf = new Configuration();

        Job job = new Job(conf, "mydb");
        // Database configuration.
        // NOTE: this runs AFTER the Job has already copied conf, so the JDBC
        // settings never reach the job; that matches the IOException thrown in
        // DBOutputFormat.getRecordWriter. Call configureDB before creating the
        // Job, or pass job.getConfiguration() here instead of conf.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/hadoop", "root", "123456");

        FileInputFormat.addInputPaths(job, INPUT_PATH);
        FileInputFormat.addInputPaths(job, INPUT_PATH1);
        job.setMapperClass(UserDBMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DBUserWritable.class);
        job.setReducerClass(UserDBReducer.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(DBUserWritable.class);
        // Output goes to MySQL instead of a file path.
        DBOutputFormat.setOutput(job, "user_tables", "name", "sex", "age", "department");
        job.setOutputFormatClass(DBOutputFormat.class);
        boolean re = job.waitForCompletion(true);
        System.out.println(re);
    }
}
```

[Error] P.S. This is a table join; writing the result to local files works fine, but writing to the database fails:

```
17/11/10 11:39:11 WARN output.FileOutputCommitter: Output Path is null in cleanupJob()
17/11/10 11:39:11 WARN mapred.LocalJobRunner: job_local1812680657_0001
java.lang.Exception: java.io.IOException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:541)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 running in uber mode : false
17/11/10 11:39:12 INFO mapreduce.Job: map 100% reduce 0%
17/11/10 11:39:12 INFO mapreduce.Job: Job job_local1812680657_0001 failed with state FAILED due to: NA
17/11/10 11:39:12 INFO mapreduce.Job: Counters: 35
```

Java MapReduce wordcount: counting visits and distinct users

Within one day, I need to know how many visit records the page has per hour, and how many distinct users are among those records. The input looks like:

```
hourid  url
0       com
0       com
0       cn
0       net
```

The output should look like:

```
hourid  visitscount  userscount
0       4            3
```

visitscount is simply the number of records for the hour; userscount is the number of distinct values (com, cn, net). The usual wordcount approach just accumulates values per key, and I can't see how to extend it; please advise (see the sketch below). Ideally the output would be a CSV document.
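A minimal sketch under the stated format: the mapper emits (hourid, url), and the reducer counts every record for the visit count while collecting the urls in a HashSet for the distinct count. Class names are illustrative, not from the question:

```
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class HourlyStats {

    public static class HourMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().trim().split("\\s+");
            if (fields.length == 2 && !"hourid".equals(fields[0])) { // skip the header row
                context.write(new Text(fields[0]), new Text(fields[1]));
            }
        }
    }

    public static class HourReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text hour, Iterable<Text> urls, Context context)
                throws IOException, InterruptedException {
            long visits = 0;
            Set<String> distinct = new HashSet<>(); // fits in memory for modest cardinality
            for (Text url : urls) {
                visits++;
                distinct.add(url.toString());
            }
            // Comma-separated value part yields CSV-like lines in the output file.
            context.write(hour, new Text(visits + "," + distinct.size()));
        }
    }
}
```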
