MapReduce Basics in Practice on the 头哥 (EduCoder) Platform


    1. Challenge 1: Grade Statistics

    Programming Requirements

    Use MapReduce to compute each student's best score in the class. The input path is /user/test/input; write the computed results to the /user/test/output/ directory.
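
    For reference, here is a hypothetical input/output pair (the names and scores are invented purely for illustration; the real grading data is already in /user/test/input). Each input line holds a student name and one score, separated by a space:

    zhangsan 68
    zhangsan 92
    lisi 75

    Expected output, one line per student with that student's best score (keys are sorted by the framework):

    lisi 75
    zhangsan 92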

    First, enter the commands in the terminal, as follows (one command per line):

    touch file01
    echo Hello World Bye World
    cat file01
    echo Hello World Bye World >file01
    cat file01
    touch file02
    echo Hello Hadoop Goodbye Hadoop >file02
    cat file02
    start-dfs.sh
    hadoop fs -mkdir /usr
    hadoop fs -mkdir /usr/input
    hadoop fs -ls /usr/output
    hadoop fs -ls /
    hadoop fs -ls /usr
    hadoop fs -put file01 /usr/input
    hadoop fs -put file02 /usr/input
    hadoop fs -ls /usr/input
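
    To double-check that the uploads worked, the following commands should print the file contents back (assuming the commands above succeeded):

    hadoop fs -cat /usr/input/file01
    hadoop fs -cat /usr/input/file02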
    
    
    
    

    Code section:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        /********** Begin **********/
        // Mapper: each input line is "name score"; emit (name, score)
        public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private IntWritable score = new IntWritable();
            private Text word = new Text();
            public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString(), "\n");
                while (itr.hasMoreTokens()) {
                    String[] str = itr.nextToken().split(" ");
                    word.set(str[0]);                    // student name
                    score.set(Integer.parseInt(str[1])); // score
                    context.write(word, score);
                }
            }
        }
        // Reducer (also used as the combiner): keep the maximum score for each student
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private IntWritable result = new IntWritable();
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int maxScore = 0;
                for (IntWritable val : values) {
                    maxScore = Math.max(maxScore, val.get());
                }
                result.set(maxScore);
                context.write(key, result);
            }
        }
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            String inputFile = "/user/test/input";
            String outputFile = "/user/test/output/";
            FileInputFormat.addInputPath(job, new Path(inputFile));
            FileOutputFormat.setOutputPath(job, new Path(outputFile));
            job.waitForCompletion(true);
        /********** End **********/
        }
    }
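
    On the platform you normally just fill in the code above and click Evaluate, but if you want to run the job by hand, a rough sketch looks like this (assuming the source is saved as WordCount.java and the hadoop command is on the PATH; part-r-00000 is the default output file of a single reducer):

    javac -classpath $(hadoop classpath) WordCount.java
    jar cf wordcount.jar WordCount*.class
    hadoop jar wordcount.jar WordCount
    hadoop fs -cat /user/test/output/part-r-00000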
    
    

    2. Challenge 2: Merging File Contents and Removing Duplicates

    Programming Requirements

    Next, let's consolidate what we have learned about MapReduce with an exercise.

    Given two input files, file1 and file2, write a MapReduce program that merges the two files and removes the duplicate content, producing a new output file, file3.
    To complete the merge-and-deduplicate task, your program must combine files that contain duplicate content into a single integrated file with no duplicates, following these rules:

    The first column is ordered by student ID;
    For identical student IDs, order by x, y, z;
    Input path: /user/tmp/input/;
    Output path: /user/tmp/output/.
    Note: the input files have already been created on the back end; you do not need to create them again.

    Start Hadoop before clicking Evaluate!
    So first enter the following start command in the terminal:

    
    start-dfs.sh
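
    After start-dfs.sh finishes, it is worth confirming that the HDFS daemons are running and that the pre-created input is visible before clicking Evaluate (paths taken from the requirements above):

    jps
    hadoop fs -ls /user/tmp/input/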
    
    
    
    import java.io.IOException;
    
    import java.util.*;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;
    
    public class Merge {
    
    	/**
    	 * @param args
    	 * Merge files A and B, remove the duplicate content, and produce a new output file C
    	 */
    	// Override the map function here: copy the input value straight into the output key.
    	// Note that the map method must declare: throws IOException, InterruptedException
    	public static class Map  extends Mapper<Object, Text, Text, Text>{
    	
        /********** Begin **********/
    
            public void map(Object key, Text value, Context content) 
                throws IOException, InterruptedException {  
                Text text1 = new Text();
                Text text2 = new Text();
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    text1.set(itr.nextToken());
                    text2.set(itr.nextToken());
                    content.write(text1, text2);
                }
            }  
    	/********** End **********/
    	} 
    		
    	// Override the reduce function here: copy the input key straight into the output key.
    	// Note that the reduce method must declare: throws IOException, InterruptedException
    	public static class  Reduce extends Reducer<Text, Text, Text, Text> {
        /********** Begin **********/
            
            public void reduce(Text key, Iterable<Text> values, Context context) 
                throws IOException, InterruptedException {
                Set<String> set = new TreeSet<String>();
                for(Text tex : values){
                    set.add(tex.toString());
                }
                for(String tex : set){
                    context.write(key, new Text(tex));
                }
            }  
        
    	/********** End **********/
    
    	}
    	
    	public static void main(String[] args) throws Exception{
    
    		Configuration conf = new Configuration();
    		conf.set("fs.default.name","hdfs://localhost:9000");
    		
    		Job job = Job.getInstance(conf,"Merge and duplicate removal");
    		job.setJarByClass(Merge.class);
    		job.setMapperClass(Map.class);
    		job.setCombinerClass(Reduce.class);
    		job.setReducerClass(Reduce.class);
    		job.setOutputKeyClass(Text.class);
    		job.setOutputValueClass(Text.class);
    		String inputPath = "/user/tmp/input/";  // set the input path here
    		String outputPath = "/user/tmp/output/";  // set the output path here
    
    		FileInputFormat.addInputPath(job, new Path(inputPath));
    		FileOutputFormat.setOutputPath(job, new Path(outputPath));
    		System.exit(job.waitForCompletion(true) ? 0 : 1);
    	}
    
    }
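
    One practical note when re-running the job by hand: FileOutputFormat refuses to write into an existing directory, so if an earlier attempt already created /user/tmp/output/, delete it first, and inspect the merged result afterwards:

    hadoop fs -rm -r /user/tmp/output
    hadoop fs -cat /user/tmp/output/part-r-00000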
    
    
    

    3. Challenge 3: Information Mining - Mining Parent-Child Relationships

    Programming Requirements

    Your program must mine parent-child relationships and produce a table of grandchild-grandparent relationships. The rules are:

    Grandchild first, grandparent second;
    Input path: /user/reduce/input;
    Output path: /user/reduce/output.

    Start Hadoop before clicking Evaluate!
    So first enter the following start command in the terminal:

    
    start-dfs.sh
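
    To see why the mapper below emits every (child, parent) record twice, walk through a tiny hypothetical input (the names are invented; the real data sits in /user/reduce/input):

    child parent
    Tom Jack
    Jack Alice

    For "Tom Jack" the mapper emits (Jack, 1+Tom+Jack), keyed by the parent column, and (Tom, 2+Tom+Jack), keyed by the child column; "Jack Alice" is handled the same way. After the shuffle, the key Jack therefore holds both 1+Tom+Jack (Tom is a child of Jack, so a candidate grandchild) and 2+Jack+Alice (Alice is a parent of Jack, so a candidate grandparent), and the reducer's cross product of the two lists produces the pair Tom -> Alice.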
    
    
    
    import java.io.IOException;
    import java.util.*;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;
    
    public class simple_data_mining {
    	public static int time = 0;
    
    	/**
    	 * @param args
    	 * Input: a child-parent table
    	 * Output: a table expressing the grandchild-grandparent relationship
    	 */
    	// Map splits each input line on spaces into child and parent, then outputs the pair once in forward
    	// order as the right table and once in reverse order as the left table. Note that a left/right-table
    	// marker must be added to the output value.
    	public static class Map extends Mapper<Object, Text, Text, Text>{
    		public void map(Object key, Text value, Context context) throws IOException,InterruptedException{
    			/********** Begin **********/
    			String line = value.toString();
    			String[] childAndParent = line.split(" ");
    			List<String> list = new ArrayList<>(2);
    			for (String childOrParent : childAndParent) {
    				if (!"".equals(childOrParent)) {
    					list.add(childOrParent);
    				}
    			}
    			if (!"child".equals(list.get(0))) {
    				String childName = list.get(0);
    				String parentName = list.get(1);
    				String relationType = "1";
    				context.write(new Text(parentName), new Text(relationType + "+" + childName + "+" + parentName));
    				relationType = "2";
    				context.write(new Text(childName), new Text(relationType + "+" + childName + "+" + parentName));
    			}
    			/********** End **********/
    		}
    	}
    
    	public static class Reduce extends Reducer<Text, Text, Text, Text>{
    		public void reduce(Text key, Iterable<Text> values,Context context) throws IOException,InterruptedException{
    				/********** Begin **********/
    
    			// write the header row once
    			if (time == 0) {
    				context.write(new Text("grand_child"), new Text("grand_parent"));
    				time++;
    			}

    			// children of this key (collected from left-table records)
    			List<String> grandChild = new ArrayList<>();

    			// parents of this key (collected from right-table records)
    			List<String> grandParent = new ArrayList<>();

    			// split each value into (relationType, child, parent) and sort it into the two lists
    			for (Text text : values) {
    				String s = text.toString();
    				String[] relation = s.split("\\+");
    				String relationType = relation[0];
    				String childName = relation[1];
    				String parentName = relation[2];
    				if ("1".equals(relationType)) {
    					grandChild.add(childName);
    				} else {
    					grandParent.add(parentName);
    				}
    			}

    			// join: pair every grandchild with every grandparent
    			int grandParentNum = grandParent.size();
    			int grandChildNum = grandChild.size();
    			if (grandParentNum != 0 && grandChildNum != 0) {
    				for (int m = 0; m < grandChildNum; m++) {
    					for (int n = 0; n < grandParentNum; n++) {
    						// emit the result
    						context.write(new Text(grandChild.get(m)), new Text(grandParent.get(n)));
    					}
    				}
    			}
    				/********** End **********/
    		}
    	}
    	public static void main(String[] args) throws Exception{
    		Configuration conf = new Configuration();
    		Job job = Job.getInstance(conf,"Single table join");
    		job.setJarByClass(simple_data_mining.class);
    		job.setMapperClass(Map.class);
    		job.setReducerClass(Reduce.class);
    		job.setOutputKeyClass(Text.class);
    		job.setOutputValueClass(Text.class);
    		String inputPath = "/user/reduce/input";   // set the input path
    		String outputPath = "/user/reduce/output";   // set the output path
    		FileInputFormat.addInputPath(job, new Path(inputPath));
    		FileOutputFormat.setOutputPath(job, new Path(outputPath));
    		System.exit(job.waitForCompletion(true) ? 0 : 1);
    
    	}
    }
    
    
    
Original article: https://blog.csdn.net/m0_74459049/article/details/134259095