• Flume Practice Examples


    Contents

    Installing Flume: http://t.csdn.cn/J1Pnn

    I. Collecting new files from a directory into HDFS

    1. To write data to HDFS, Flume must have the relevant Hadoop jars

    2. Create the dir-hdfs.conf file

    II. Collecting appended file content into HDFS

    1. Requirements analysis

    2. Implementation

    III. Chaining multiple agents

    1. Configure the listening server on hadoop02

    2. Configure the listening client on another node


    Installing Flume: http://t.csdn.cn/J1Pnn

    I. Collecting new files from a directory into HDFS

    Reference documentation: Flume 1.7.0 User Guide — Apache Flume

    Requirement: use Flume to monitor an entire directory for new files.

    1. To write data to HDFS, Flume must have the relevant Hadoop jars on its classpath

    Copy commons-configuration-1.6.jar, hadoop-auth-2.7.3.jar, hadoop-common-2.7.3.jar, hadoop-hdfs-2.7.3.jar, commons-io-2.4.jar, and htrace-core-3.1.0-incubating.jar into /opt/module/flume/lib.

    1. Where the jars above are located

    [root@hadoop01 hadoop]# pwd
    /opt/module/hadoop-2.7.3/share/hadoop    # parent directory of the jars listed above

    2. commons-configuration-1.6.jar, commons-io-2.4.jar, htrace-core-3.1.0-incubating.jar, and hadoop-auth-2.7.3.jar

    [root@hadoop01 lib]# pwd
    /opt/module/hadoop-2.7.3/share/hadoop/tools/lib
    [root@hadoop01 lib]# cp commons-configuration-1.6.jar /opt/module/flume/lib/
    [root@hadoop01 lib]# cp commons-io-2.4.jar /opt/module/flume/lib/
    [root@hadoop01 lib]# cp htrace-core-3.1.0-incubating.jar /opt/module/flume/lib/
    [root@hadoop01 lib]# cp hadoop-auth-2.7.3.jar /opt/module/flume/lib/

    3.hadoop-common-2.7.3.jar

    [root@hadoop01 common]# pwd
    /opt/module/hadoop-2.7.3/share/hadoop/common
    [root@hadoop01 common]# cp hadoop-common-2.7.3.jar /opt/module/flume/lib/

    4.hadoop-hdfs-2.7.3.jar

    [root@hadoop01 hdfs]# pwd
    /opt/module/hadoop-2.7.3/share/hadoop/hdfs
    [root@hadoop01 hdfs]# cp hadoop-hdfs-2.7.3.jar /opt/module/flume/lib/
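
    A quick sanity check that all six jars landed in Flume's lib directory (a minimal sketch; the grep pattern simply matches the file names copied above):

    [root@hadoop01 hdfs]# ls /opt/module/flume/lib | grep -E 'commons-configuration|commons-io|htrace-core|hadoop-(auth|common|hdfs)'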

    2. Create the dir-hdfs.conf file

    1. Create the file

    [root@hadoop01 flume]# touch dir-hdfs.conf

    2. Edit the file

    [root@hadoop01 flume]# vim dir-hdfs.conf
    # Name the three core components
    ag1.sources = source1
    ag1.sinks = sink1
    ag1.channels = channel1
    # Configure the source
    ag1.sources.source1.type = spooldir
    ag1.sources.source1.spoolDir = /root/log/
    ag1.sources.source1.fileSuffix = .FINISHED
    #ag1.sources.source1.deserializer.maxLineLength = 5120
    # Configure the sink
    ag1.sinks.sink1.type = hdfs
    ag1.sinks.sink1.hdfs.path = hdfs://hadoop01:9000/access_log/%y-%m-%d/%H-%M
    ag1.sinks.sink1.hdfs.filePrefix = app_log
    ag1.sinks.sink1.hdfs.fileSuffix = .log
    ag1.sinks.sink1.hdfs.batchSize = 100
    ag1.sinks.sink1.hdfs.fileType = DataStream
    ag1.sinks.sink1.hdfs.writeFormat = Text
    ## roll: rules for rolling to a new output file
    ## roll by file size (bytes)
    ag1.sinks.sink1.hdfs.rollSize = 512000
    ## roll by number of events
    ag1.sinks.sink1.hdfs.rollCount = 1000000
    ## roll by time interval (seconds)
    ag1.sinks.sink1.hdfs.rollInterval = 60
    ## rules for rounding down the timestamp used in the directory path
    ag1.sinks.sink1.hdfs.round = true
    ag1.sinks.sink1.hdfs.roundValue = 10
    ag1.sinks.sink1.hdfs.roundUnit = minute
    ag1.sinks.sink1.hdfs.useLocalTimeStamp = true
    # Configure the channel
    ag1.channels.channel1.type = memory
    ## maximum number of events held in the channel
    ag1.channels.channel1.capacity = 500000
    ## events per transaction (600) for Flume's transaction control
    ag1.channels.channel1.transactionCapacity = 600
    # Bind the source and the sink to the channel
    ag1.sources.source1.channels = channel1
    ag1.sinks.sink1.channel = channel1

    3. Start the Flume agent (foreground mode, for testing)

    [root@hadoop01 flume]# bin/flume-ng agent -c conf -f dir-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console

    Important notes on the Spooling Directory Source:

    1) Do not create a file in the monitored directory and keep writing to it; only drop in files that are already complete.

    2) Fully ingested files are renamed with the .FINISHED suffix (set via fileSuffix above; Flume's default is .COMPLETED).

    3) The monitored directory is scanned for changes every 500 ms by default (the pollDelay property).
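
    To exercise the source, drop a complete file into the monitored directory and then list the sink's target path (a minimal sketch, assuming an HDFS client is on the PATH; the copied file is arbitrary):

    [root@hadoop01 ~]# cp /etc/hosts /root/log/test01.txt
    [root@hadoop01 ~]# hdfs dfs -ls /access_log/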

    II. Collecting appended file content into HDFS

    Requirement: a business system writes logs with log4j and keeps appending to them; the appended lines must be collected into HDFS in real time.

    1. Requirements analysis

    The requirement breaks down into the usual three elements:

    Source: monitor updates to a file's content with an exec source running 'tail -F file'.

    Sink: the HDFS file system, via the hdfs sink.

    Channel: the conduit between source and sink; either a file channel or a memory channel works (a file-channel sketch follows below).
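
    If events must survive an agent restart, the memory channel used in the implementation below can be swapped for a file channel. A minimal sketch, with illustrative (assumed) checkpoint and data directories; any writable local paths work:

    # file channel: buffers events on local disk, so they survive a restart
    ag1.channels.channel1.type = file
    ag1.channels.channel1.checkpointDir = /opt/module/flume/data/checkpoint
    ag1.channels.channel1.dataDirs = /opt/module/flume/data/data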

    2. Implementation

    [root@hadoop01 ~]# cd /root/log/
    [root@hadoop01 log]# touch access.log
    [root@hadoop01 flume]# touch file-hdfs.conf
    [root@hadoop01 flume]# vim file-hdfs.conf
    # Name the three core components
    ag1.sources = source1
    ag1.sinks = sink1
    ag1.channels = channel1
    # Configure the source
    ag1.sources.source1.type = exec
    ag1.sources.source1.command = tail -F /root/log/access.log
    # Configure the sink
    ag1.sinks.sink1.type = hdfs
    ag1.sinks.sink1.hdfs.path = hdfs://hadoop01:9000/access_log/%y-%m-%d
    ag1.sinks.sink1.hdfs.filePrefix = app_log
    ag1.sinks.sink1.hdfs.fileSuffix = .log
    ag1.sinks.sink1.hdfs.batchSize = 100
    ag1.sinks.sink1.hdfs.fileType = DataStream
    ag1.sinks.sink1.hdfs.writeFormat = Text
    ## roll: rules for rolling to a new output file
    ## roll by file size (bytes)
    ag1.sinks.sink1.hdfs.rollSize = 512000
    ## roll by number of events
    ag1.sinks.sink1.hdfs.rollCount = 1000000
    ## roll by time interval (seconds)
    ag1.sinks.sink1.hdfs.rollInterval = 60
    ## rules for rounding down the timestamp used in the directory path
    ag1.sinks.sink1.hdfs.round = true
    ag1.sinks.sink1.hdfs.roundValue = 10
    ag1.sinks.sink1.hdfs.roundUnit = minute
    ag1.sinks.sink1.hdfs.useLocalTimeStamp = true
    # Configure the channel
    ag1.channels.channel1.type = memory
    ## maximum number of events held in the channel
    ag1.channels.channel1.capacity = 500000
    ## events per transaction (600) for Flume's transaction control
    ag1.channels.channel1.transactionCapacity = 600
    # Bind the source and the sink to the channel
    ag1.sources.source1.channels = channel1
    ag1.sinks.sink1.channel = channel1

    3. Test by writing data
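
    With the agent running, append a few lines to the tailed file and confirm they reach HDFS (a minimal sketch; the payload is arbitrary):

    [root@hadoop01 ~]# echo "test $(date)" >> /root/log/access.log
    [root@hadoop01 ~]# hdfs dfs -ls /access_log/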

      

    III. Chaining multiple agents

    1. Configure the listening server on hadoop02

    1. The avro-hdfs.conf configuration file

    [root@hadoop02 flume]# vim avro-hdfs.conf
    # Name the three core components
    ag1.sources = source1
    ag1.sinks = sink1
    ag1.channels = channel1
    # Configure the source; an avro source acts as a receiving server
    ag1.sources.source1.type = avro
    ag1.sources.source1.bind = hadoop02
    ag1.sources.source1.port = 4141
    # Configure the sink
    ag1.sinks.sink1.type = hdfs
    ag1.sinks.sink1.hdfs.path = hdfs://hadoop01:9000/flume/taildata/%y-%m-%d/
    ag1.sinks.sink1.hdfs.filePrefix = tail-
    ag1.sinks.sink1.hdfs.round = true
    ag1.sinks.sink1.hdfs.roundValue = 24
    ag1.sinks.sink1.hdfs.roundUnit = hour
    ag1.sinks.sink1.hdfs.rollInterval = 0
    ag1.sinks.sink1.hdfs.rollSize = 0
    ag1.sinks.sink1.hdfs.rollCount = 50
    ag1.sinks.sink1.hdfs.batchSize = 10
    ag1.sinks.sink1.hdfs.useLocalTimeStamp = true
    # Output file type: the default is SequenceFile; DataStream writes plain text
    ag1.sinks.sink1.hdfs.fileType = DataStream
    # Configure the channel
    ag1.channels.channel1.type = memory
    ## maximum number of events held in the channel
    ag1.channels.channel1.capacity = 1000
    ## events per transaction (100) for Flume's transaction control
    ag1.channels.channel1.transactionCapacity = 100
    # Bind the source and the sink to the channel
    ag1.sources.source1.channels = channel1
    ag1.sinks.sink1.channel = channel1

    2. Start the server agent

    [root@hadoop02 flume]# bin/flume-ng agent -c conf -f avro-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console 

    3. Verify it is listening

    [root@hadoop02 ~]# netstat -nltp
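
    To look at just the Flume port, filter for 4141 (the port bound by the avro source above):

    [root@hadoop02 ~]# netstat -nltp | grep 4141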

    4. Inspect the Flume process

    [root@hadoop02 ~]# jps -m
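
    The listing should include a Java process for Flume's launcher class (org.apache.flume.node.Application) whose arguments mention the config file and agent name passed to flume-ng, roughly -f avro-hdfs.conf -n ag1; the exact formatting depends on the JDK version.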

    2. Configure the listening client on another node

    1. The tail-avro.conf configuration file

    [root@hadoop01 flume]# vim tail-avro.conf
    # Name the three core components
    ag1.sources = source1
    ag1.sinks = sink1
    ag1.channels = channel1
    # Configure the source
    ag1.sources.source1.type = exec
    ag1.sources.source1.command = tail -F /root/log/access.log
    # Configure the sink; the avro sink sends events to the avro source on hadoop02
    ag1.sinks.sink1.type = avro
    ag1.sinks.sink1.hostname = hadoop02
    ag1.sinks.sink1.port = 4141
    ag1.sinks.sink1.batch-size = 2
    # Configure the channel
    ag1.channels.channel1.type = memory
    ## maximum number of events held in the channel
    ag1.channels.channel1.capacity = 1000
    ## events per transaction (100) for Flume's transaction control
    ag1.channels.channel1.transactionCapacity = 100
    # Bind the source and the sink to the channel
    ag1.sources.source1.channels = channel1
    ag1.sinks.sink1.channel = channel1

    2. Start the client agent for testing

    [root@hadoop01 flume]#  bin/flume-ng agent -c conf -f tail-avro.conf -n ag1 -Dflume.root.logger=INFO,console

    3. Send test data

    [root@hadoop01 flume]# while true; do echo `date` >> /root/log/access.log; sleep 0.1; done

    4. View the data
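
    The events should appear under the server agent's sink path (a minimal check, assuming an HDFS client is on the PATH; in-flight files may still carry the temporary .tmp suffix):

    [root@hadoop01 flume]# hdfs dfs -ls /flume/taildata/
    [root@hadoop01 flume]# hdfs dfs -cat /flume/taildata/*/tail-* | head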

  • Original article: https://blog.csdn.net/m0_55834564/article/details/126900831