1) What is serialization
Serialization is the process of converting objects in memory into a sequence of bytes (or some other data-transfer format) so that they can be stored on disk (persisted) or transmitted over the network.
Deserialization is the reverse process: converting a received byte sequence (or other data-transfer format), or data persisted on disk, back into objects in memory.
2) Why serialize
In general, "live" objects exist only in memory and are lost once the machine shuts down. Moreover, a "live" object can only be used by the local process; it cannot be sent to another computer on the network. Serialization makes it possible to store "live" objects and to send them to a remote machine.
3) Why not use Java's own serialization
Java's serialization (Serializable) is a heavyweight framework: a serialized object carries a lot of extra information (checksums, headers, the inheritance hierarchy, and so on), which makes it inefficient to transmit over the network. For this reason, Hadoop developed its own serialization mechanism (Writable).
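To make the size difference concrete, here is a small sketch (not part of the original text; the class name SerializationSizeDemo and the sample value are made up) that serializes one long value with Java's ObjectOutputStream and with Hadoop's LongWritable, then prints how many bytes each produces:

import org.apache.hadoop.io.LongWritable;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializationSizeDemo {
    public static void main(String[] args) throws IOException {
        long value = 123456789L;

        // Java built-in serialization: writes a stream header, class metadata, etc.
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(value);
        }

        // Hadoop Writable serialization: writes only the 8 bytes of the long
        ByteArrayOutputStream writableBytes = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(writableBytes)) {
            new LongWritable(value).write(dos);
        }

        System.out.println("Java Serializable: " + javaBytes.size() + " bytes");
        System.out.println("Hadoop Writable:   " + writableBytes.size() + " bytes");
    }
}

Java serialization emits a stream header and class descriptor in addition to the value itself, while the Writable output is just the eight bytes of the long.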
4) Making a custom bean serializable (Writable)
In enterprise development, the basic serializable types provided by Hadoop often cannot cover every need. For example, to pass a bean object between stages inside the Hadoop framework, that object must implement the serialization interface.
The steps for making a bean object serializable are:
(1) Implement the Writable interface.
(2) Deserialization instantiates the bean reflectively through the no-argument constructor, so the class must provide one.
(3) Override the serialization method write().
(4) Override the deserialization method readFields().
(5) Note that the deserialization order must match the serialization order exactly (see the round-trip check after the FlowBean class below).
(6) To make the result readable in the output file, override toString(); separating fields with "\t" makes later processing easier.
Case study: count the total upstream traffic, total downstream traffic and total traffic consumed by each phone number.
Input data
Expected output format
Requirement analysis
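Based on the parsing logic in the Mapper and the output of the Reducer below, a hypothetical input record and its corresponding output record would look like the following (the phone number, IP, site and traffic values are made up for illustration):

Input line (tab-separated; the phone number is the second field, the upstream flow the third-to-last field, the downstream flow the second-to-last field):
1	13812345678	192.196.100.1	www.example.com	2481	24681	200

Output line (phone number, total upstream flow, total downstream flow, total flow):
13812345678	2481	24681	27162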
Write the MapReduce program
package com.lhl.mapreduce.writeable;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
/**
 * @ClassName: FlowBean
 * @author: lei lei
 * @date: 2022/08/23/9:50
 * @describe: Custom Writable bean holding the upstream, downstream and total flow for one phone number
 */
public class FlowBean implements Writable {

    private Long upFlow;
    private Long downFlow;
    private Long sumFlow;

    // Deserialization creates the bean reflectively, so a no-argument constructor is required
    public FlowBean() {
    }

    @Override
    public String toString() {
        // Tab-separated output so the result file is easy to process downstream
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public Long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(Long upFlow) {
        this.upFlow = upFlow;
    }

    public Long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(Long downFlow) {
        this.downFlow = downFlow;
    }

    // Serialization: write the fields in a fixed order
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    // Deserialization: read the fields in exactly the order they were written
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        upFlow = dataInput.readLong();
        downFlow = dataInput.readLong();
        sumFlow = dataInput.readLong();
    }

    // Derive the total flow from the upstream and downstream flow
    public void setSumFlow() {
        sumFlow = upFlow + downFlow;
    }
}
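As a quick way to confirm step (5), that readFields() reads the fields in exactly the order write() wrote them, the bean can be round-tripped through an in-memory byte array. The following test class is a sketch added here for illustration and is not part of the original program:

package com.lhl.mapreduce.writeable;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowBeanRoundTripTest {
    public static void main(String[] args) throws IOException {
        // Build a bean and serialize it the same way Hadoop would
        FlowBean original = new FlowBean();
        original.setUpFlow(100L);
        original.setDownFlow(200L);
        original.setSumFlow();

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            original.write(out);
        }

        // Deserialize into a fresh bean created with the no-argument constructor
        FlowBean copy = new FlowBean();
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()))) {
            copy.readFields(in);
        }

        // toString() prints upFlow, downFlow and sumFlow separated by tabs
        System.out.println("original: " + original);
        System.out.println("copy:     " + copy);
    }
}

If the two printed lines match, the serialization and deserialization order agree.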
package com.lhl.mapreduce.writeable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
/**
 * @ClassName: FlowMapper
 * @author: lei lei
 * @date: 2022/08/23/9:48
 * @describe: Extracts the phone number, upstream flow and downstream flow from each input line
 */
public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    // Reuse the output key/value objects instead of creating new ones for every record
    private Text keyOut = new Text();
    private FlowBean valueOut = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Split the tab-separated input line
        String lineData = value.toString();
        String[] datas = lineData.split("\t");

        // The phone number is the second field
        keyOut.set(datas[1]);

        // Index from the end of the line so the parsing still works if the number of middle fields varies:
        // the upstream flow is the third-to-last field and the downstream flow the second-to-last
        valueOut.setUpFlow(Long.parseLong(datas[datas.length - 3]));
        valueOut.setDownFlow(Long.parseLong(datas[datas.length - 2]));
        valueOut.setSumFlow();

        context.write(keyOut, valueOut);
    }
}
package com.lhl.mapreduce.writeable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
/**
 * @ClassName: FlowReducer
 * @author: lei lei
 * @date: 2022/08/23/9:49
 * @describe: Sums the upstream and downstream flow of all records that share the same phone number
 */
public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    private final FlowBean outValue = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // Accumulate the traffic of every record for this phone number
        Long totalUpFlow = 0L;
        Long totalDownFlow = 0L;
        for (FlowBean flowBean : values) {
            totalUpFlow += flowBean.getUpFlow();
            totalDownFlow += flowBean.getDownFlow();
        }

        // Fill the output bean and compute the total flow
        outValue.setUpFlow(totalUpFlow);
        outValue.setDownFlow(totalDownFlow);
        outValue.setSumFlow();

        context.write(key, outValue);
    }
}
package com.lhl.mapreduce.writeable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
/**
 * @ClassName: FlowDriver
 * @author: lei lei
 * @date: 2022/08/23/9:50
 * @describe: Configures and submits the flow-statistics job
 */
public class FlowDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        // Create the configuration object
        Configuration conf = new Configuration();

        // Create the Job object
        Job job = Job.getInstance(conf);

        // Tell Hadoop which jar contains the driver (needed when running on a cluster)
        job.setJarByClass(FlowDriver.class);

        // Set the Mapper and Reducer for this job
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        // Set the key and value types of the map output
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);

        // Set the key and value types of the final output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        // Set the input and output paths (the output directory must not already exist)
        FileInputFormat.addInputPath(job, new Path("E:\\Test\\phone_data"));
        FileOutputFormat.setOutputPath(job, new Path("E:\\Test\\phone_data_out1"));

        // Submit the job and wait for it to finish
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
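The driver above hard-codes local Windows paths, which is convenient for testing from the IDE. To run the job on a cluster, the input and output paths are usually taken from the command-line arguments instead, for example:

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

and the packaged jar is then submitted with a command along these lines (the jar name and HDFS paths are placeholders):

    hadoop jar flow-count.jar com.lhl.mapreduce.writeable.FlowDriver /input/phone_data /output/phone_data_out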