m0_58332232 2022-12-08 01:17 · acceptance rate: 40%
95 views
Closed

Summation under MapReduce (a working solution will be accepted), urgent

Problem: count the total number of earthquakes with magnitude > 6.
Requirement: write the Map, Reduce, and Job parts separately.

[image: sample earthquake data rows]

The fields in the sample data above are: time, magnitude, longitude, latitude, depth, region.


4 answers

  • ShowMeAI 2022-12-08 10:39

    For your earthquake data, here is my solution; hope you will accept it:

    Map-phase code:

    import java.io.IOException;
    
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    
    public class EarthquakeMapper extends Mapper<LongWritable, Text, IntWritable, IntWritable> {
    
      private static final IntWritable ONE = new IntWritable(1);
    
      @Override
      public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Parse the earthquake record
        String[] data = value.toString().split(",");
        float magnitude = Float.parseFloat(data[1]);
        // If the magnitude is above 6, emit (1, 1)
        if (magnitude > 6) {
          context.write(ONE, ONE);
        }
      }
    }
    

    Reduce-phase code:

    import java.io.IOException;
    
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Reducer;
    
    public class EarthquakeReducer extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
    
      @Override
      public void reduce(IntWritable key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        // Initialize the counter to 0
        int count = 0;
        // Walk all (1, 1) pairs, adding each value to the counter
        for (IntWritable value : values) {
          count += value.get();
        }
        // Emit the total number of earthquakes with magnitude above 6
        context.write(key, new IntWritable(count));
      }
    }
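
    The question also asks for the Job to be written separately, and the two classes above cover only Map and Reduce. A minimal driver sketch follows; the class name `EarthquakeDriver`, the job name, and taking the input/output paths from `args` are my assumptions, not part of the original answer. It must be built against the Hadoop client libraries and submitted to a cluster (or run in local mode), so it is a sketch rather than a standalone program:

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class EarthquakeDriver {
        public static void main(String[] args) throws Exception {
            // Expects two arguments: input path and output path (hypothetical convention)
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "earthquake count");
            job.setJarByClass(EarthquakeDriver.class);
            job.setMapperClass(EarthquakeMapper.class);
            job.setReducerClass(EarthquakeReducer.class);
            // The reducer is a pure sum over IntWritables, so it is safe as a combiner too
            job.setCombinerClass(EarthquakeReducer.class);
            job.setOutputKeyClass(IntWritable.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
    ```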
    
    Accepted as the best answer by the asker.
  • BsonJ 2022-12-08 09:02

    Where is the data source?

  • yy64ll826 2022-12-08 09:41

    Do you have the basics down? I can send you something similar I wrote before, and you can adapt it.

  • keeper& 2022-12-08 11:13
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    import java.io.IOException;

    public class SumValue {

        public static class SumValueMapper extends Mapper<LongWritable, Text, IntWritable, Text> {

            @Override
            protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
                // Magnitudes can be fractional (e.g. "6.5"), so parse as float rather than int
                String level = value.toString().split(",")[1];
                float magnitude = Float.parseFloat(level);
                if (magnitude > 6) {
                    // Above magnitude 6
                    context.write(new IntWritable(1), value);
                } else {
                    // Magnitude 6 or below; these records are ignored by the reducer,
                    // so they could also simply be dropped here to reduce shuffle volume
                    context.write(new IntWritable(0), value);
                }
            }
        }

        public static class SumValueReducer extends Reducer<IntWritable, Text, Text, IntWritable> {

            @Override
            protected void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
                // Compare the wrapped int: key.equals(1) would match an IntWritable
                // against an Integer and always be false
                if (key.get() == 1) {
                    // Count by iterating; Arrays.asList(values) would wrap the whole
                    // Iterable in a one-element list and always report size 1
                    int count = 0;
                    for (Text ignored : values) {
                        count++;
                    }
                    context.write(new Text("count of earthquakes above magnitude 6"), new IntWritable(count));
                }
            }
        }

        public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
            String input = "F:\\123\\input\\abc.txt";
            String output = "F:\\123\\output";
            Configuration conf = new Configuration();
            if (System.getProperty("os.name").toLowerCase().contains("win"))
                conf.set("mapreduce.app-submission.cross-platform", "true");

            // Job.getInstance replaces the deprecated new Job(conf, name) constructor
            Job job = Job.getInstance(conf, "SumValue");
            //job.setJar("./out/artifacts/hadoop_test_jar/hadoop-test.jar");
            job.setJarByClass(SumValue.class);
            job.setMapperClass(SumValueMapper.class);
            job.setReducerClass(SumValueReducer.class);
            job.setMapOutputKeyClass(IntWritable.class);
            job.setMapOutputValueClass(Text.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPaths(job, input);
            FileOutputFormat.setOutputPath(job, new Path(output));

            boolean ret = job.waitForCompletion(true);
            System.out.println(job.getJobName() + "-----" + ret);
        }
    }
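
    Both answers hinge on the same filter-and-count step, which can be sanity-checked locally without a cluster. A plain-Java sketch of that logic follows; the class name `QuakeFilter` and the sample rows (times, coordinates, magnitudes) are made up for illustration:

    ```java
    import java.util.Arrays;
    import java.util.List;

    public class QuakeFilter {
        // Counts lines whose second comma-separated field (the magnitude)
        // exceeds the threshold; this mirrors what the Mapper/Reducer pair computes.
        public static int countAbove(List<String> lines, double threshold) {
            int count = 0;
            for (String line : lines) {
                String[] fields = line.split(",");
                if (fields.length > 1 && Double.parseDouble(fields[1]) > threshold) {
                    count++;
                }
            }
            return count;
        }

        public static void main(String[] args) {
            // Hypothetical rows: time, magnitude, longitude, latitude, depth, region
            List<String> sample = Arrays.asList(
                "2022-12-01 03:15,6.5,120.1,23.5,10,RegionA",
                "2022-12-02 11:42,4.8,121.0,24.1,8,RegionB",
                "2022-12-03 19:07,7.2,119.8,22.9,15,RegionC");
            System.out.println(QuakeFilter.countAbove(sample, 6.0)); // prints 2
        }
    }
    ```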
    


Question timeline

  • Closed by the system on Dec 16
  • Answer accepted on Dec 8
  • Question created on Dec 8
