• A hands-on guide to building ZooKeeper and Kafka clusters (step by step)


    I. Environment preparation

    1. Prepare three machines

    Hostname   Host IP          ZooKeeper version   Kafka version
    worker01   192.168.19.130   zookeeper-3.6.3     kafka_2.12-3.0.1
    worker02   192.168.19.131   zookeeper-3.6.3     kafka_2.12-3.0.1
    worker03   192.168.19.132   zookeeper-3.6.3     kafka_2.12-3.0.1

    2. Install a JDK 1.8 environment on all three machines

    3. Download the Kafka package (download it here and you can skip Step II below):

     kafka_2.12-3.0.1.tgz

    4. Download the ZooKeeper package (download it here and you can skip Step II below):

    apache-zookeeper-3.6.3-bin.tar.gz
    

    II. Download the packages (official downloads, for reference only)

      1. Open the official download site: Index of /

      2. Download the ZooKeeper package, apache-zookeeper-3.6.3-bin.tar.gz

      3. Download the Kafka package, kafka_2.12-3.0.1.tgz

    III. Environment configuration

    1. Set the hostnames (run each command on its own machine)

    # set the hostname; run on worker01
    hostnamectl set-hostname worker01
    bash # open a new shell so the new hostname takes effect without a reboot
    # set the hostname; run on worker02
    hostnamectl set-hostname worker02
    bash # open a new shell so the new hostname takes effect without a reboot
    # set the hostname; run on worker03
    hostnamectl set-hostname worker03
    bash # open a new shell so the new hostname takes effect without a reboot

    2. Disable the firewall (run on worker01, worker02, and worker03)

    systemctl stop firewalld
    systemctl disable firewalld
    systemctl status firewalld

    3. Configure /etc/hosts (run on worker01, worker02, and worker03)

    cat >> /etc/hosts << EOF
    192.168.19.130 worker01
    192.168.19.131 worker02
    192.168.19.132 worker03
    EOF
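    The heredoc above appends unconditionally, so re-running the setup duplicates the entries. A hedged refinement that only appends missing lines, sketched against a temp file rather than the real /etc/hosts so it is safe to try anywhere:

```shell
# Append each mapping only if it is not already present (idempotent).
# hosts_file stands in for /etc/hosts in this sketch.
hosts_file=$(mktemp)
for entry in "192.168.19.130 worker01" "192.168.19.131 worker02" "192.168.19.132 worker03"; do
  grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
# run the same loop a second time to show nothing is duplicated
for entry in "192.168.19.130 worker01" "192.168.19.131 worker02" "192.168.19.132 worker03"; do
  grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
wc -l < "$hosts_file"   # still 3 lines after two passes
```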

    IV. Install the JDK

    1. Find the JDK package in the yum repository (run on worker01, worker02, and worker03)

    # search for jdk packages
    yum list | grep jdk
    # install the version matching this system
    yum install -y java-1.8.0-openjdk-devel.x86_64
    # confirm the jdk is installed and on the PATH
    java -version
    # later you can use jps to list all java processes on the host
    jps
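    The `java -version` check can also be scripted, for example to assert that a 1.8 JDK is on the PATH. A minimal sketch; the sample output line below is an assumption, and on a real host it would come from `java -version 2>&1 | head -n1`:

```shell
# Extract the feature version from a sample `java -version` first line.
# Real output varies by vendor; this string is illustrative only.
ver_line='openjdk version "1.8.0_372"'
major=$(printf '%s\n' "$ver_line" | sed -E 's/.*"1\.([0-9]+)\..*/\1/')
echo "JDK major version: $major"   # prints 8 for a 1.8 JDK
```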


    V. Install the ZooKeeper cluster

    1. Unpack the archive (run on worker01, worker02, and worker03)

    # create the directory
    mkdir -p /home/kafaka-zookeeper
    cd /home/kafaka-zookeeper
    # upload apache-zookeeper-3.6.3-bin.tar.gz into this directory
    # unpack it
    tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz
    # rename the unpacked directory
    mv apache-zookeeper-3.6.3-bin apache-zookeeper-3.6.3

    2. Create the data/log directories and the config file (run on worker01, worker02, and worker03)

    # create the data and log directories
    mkdir -p /home/kafaka-zookeeper/apache-zookeeper-3.6.3/{data,logs}
    # rename the sample config file
    cd /home/kafaka-zookeeper/apache-zookeeper-3.6.3/conf
    mv zoo_sample.cfg zoo.cfg

    3. Edit zoo.cfg (run on worker01, worker02, and worker03)

    vi zoo.cfg
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/home/kafaka-zookeeper/apache-zookeeper-3.6.3/data
    dataLogDir=/home/kafaka-zookeeper/apache-zookeeper-3.6.3/logs
    clientPort=2181
    server.1=192.168.19.130:2888:3888
    server.2=192.168.19.131:2888:3888
    server.3=192.168.19.132:2888:3888
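    Since zoo.cfg is identical on all three nodes, the server.N lines can be generated from the IP list instead of typed by hand. A sketch using the IPs from section I:

```shell
# Generate the server.N quorum lines for zoo.cfg from an ordered IP list.
# 2888 is the quorum port and 3888 the leader-election port, as configured above.
ips="192.168.19.130 192.168.19.131 192.168.19.132"
n=1
server_lines=""
for ip in $ips; do
  server_lines="${server_lines}server.${n}=${ip}:2888:3888
"
  n=$((n + 1))
done
printf '%s' "$server_lines"   # append this to zoo.cfg on every node
```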

    4. Create the myid file

    # run on worker01, worker02, and worker03
    cd /home/kafaka-zookeeper/apache-zookeeper-3.6.3/data
    # run on worker01, worker02, and worker03
    touch myid
    # run only on 192.168.19.130 worker01
    echo 1 > /home/kafaka-zookeeper/apache-zookeeper-3.6.3/data/myid
    # run only on 192.168.19.131 worker02
    echo 2 > /home/kafaka-zookeeper/apache-zookeeper-3.6.3/data/myid
    # run only on 192.168.19.132 worker03
    echo 3 > /home/kafaka-zookeeper/apache-zookeeper-3.6.3/data/myid
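    Because the hostnames already encode the server id (worker01 → 1), the three per-node echo commands can be collapsed into one that works on any node. A sketch; the hostname is hard-coded here so it runs anywhere, but on a real node it would be `host=$(hostname)`:

```shell
# Derive the zookeeper id from a hostname of the form workerNN.
host=worker02                 # on a real node: host=$(hostname)
id=$((10#${host#worker}))     # strip the "worker" prefix; 10# forces base-10 so "02" -> 2
echo "$id"                    # on a real node: echo "$id" > /home/kafaka-zookeeper/apache-zookeeper-3.6.3/data/myid
```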

    5. Start and stop ZooKeeper

    # start zookeeper
    /home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin/zkServer.sh start
    # stop zookeeper
    /home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin/zkServer.sh stop
    # check zookeeper status
    /home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin/zkServer.sh status
    # expected status across the 3 machines
    Mode: follower # follower
    Mode: leader # leader
    Mode: follower # follower
    # run the client (registering Kafka is covered later)
    /home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin/zkCli.sh -server <worker0x IP>:2181

    Run the start command on all three machines, then check the status on each. If a leader has been elected, the cluster started successfully; if not, check the logs under dataLogDir=/home/kafaka-zookeeper/apache-zookeeper-3.6.3/logs.
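    The leader check can be scripted by parsing the `Mode:` line that `zkServer.sh status` prints. The sample status text below is an assumption standing in for real output, which you would capture with `status=$(/home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin/zkServer.sh status 2>&1)`:

```shell
# Extract the role (leader/follower) from sample zkServer.sh status output.
status='ZooKeeper JMX enabled by default
Using config: /home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin/../conf/zoo.cfg
Mode: leader'
mode=$(printf '%s\n' "$status" | awk -F': ' '/^Mode:/ {print $2}')
echo "$mode"   # prints leader
```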

    VI. Install the Kafka cluster

    1. Unpack the archive (run on worker01, worker02, and worker03)

    # enter the directory
    cd /home/kafaka-zookeeper
    # upload kafka_2.12-3.0.1.tgz into this directory
    # unpack kafka_2.12-3.0.1.tgz
    tar -zxvf kafka_2.12-3.0.1.tgz

    2. Edit the configuration (run on worker01, worker02, and worker03)

    # create the data and log directories
    mkdir -p /home/kafaka-zookeeper/kafka_2.12-3.0.1/{data,logs}

    # enter the kafka_2.12-3.0.1 config directory
    cd /home/kafaka-zookeeper/kafka_2.12-3.0.1/config
    # back up the config file
    cp server.properties server.properties.bak
    # strip comment lines (note: this deletes every line containing '#',
    # which is safe for the stock sample file where '#' only appears in comments)
    sed -i "/#/d" server.properties

    vim server.properties
    # worker01 192.168.19.130 configuration
    broker.id=1
    listeners=PLAINTEXT://192.168.19.130:9092
    num.network.threads=12
    num.io.threads=24
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/home/kafaka-zookeeper/kafka_2.12-3.0.1/logs
    num.partitions=3
    num.recovery.threads.per.data.dir=12
    offsets.topic.replication.factor=3
    transaction.state.log.replication.factor=3
    transaction.state.log.min.isr=3
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=192.168.19.130:2181,192.168.19.131:2181,192.168.19.132:2181
    zookeeper.connection.timeout.ms=18000
    group.initial.rebalance.delay.ms=0
    # the other two machines use the same file with only these per-node changes
    # worker02 192.168.19.131 configuration
    broker.id=2
    listeners=PLAINTEXT://192.168.19.131:9092
    log.dirs=/home/kafaka-zookeeper/kafka_2.12-3.0.1/logs
    num.partitions=3
    zookeeper.connect=192.168.19.130:2181,192.168.19.131:2181,192.168.19.132:2181
    # worker03 192.168.19.132 configuration
    broker.id=3
    listeners=PLAINTEXT://192.168.19.132:9092
    log.dirs=/home/kafaka-zookeeper/kafka_2.12-3.0.1/logs
    num.partitions=3
    zookeeper.connect=192.168.19.130:2181,192.168.19.131:2181,192.168.19.132:2181
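    Only broker.id and listeners differ between the three nodes, so they can be patched with sed instead of edited by hand on each machine. A sketch against a two-line stand-in for server.properties (the stand-in file and the node_id/node_ip values are assumptions; on a real node you would run the sed commands against the actual config file):

```shell
# Patch the two per-node keys in a sample properties file.
props=$(mktemp)
printf 'broker.id=1\nlisteners=PLAINTEXT://192.168.19.130:9092\n' > "$props"
node_id=2                     # per-node value, e.g. 2 on worker02
node_ip=192.168.19.131        # per-node value, e.g. worker02's IP
sed -i "s/^broker\.id=.*/broker.id=${node_id}/" "$props"
# use | as the sed delimiter because the value contains slashes
sed -i "s|^listeners=.*|listeners=PLAINTEXT://${node_ip}:9092|" "$props"
cat "$props"
```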

    3. Start the Kafka cluster (run on worker01, worker02, and worker03)

    # start kafka in the background, appending stdout/stderr to a log file
    nohup /home/kafaka-zookeeper/kafka_2.12-3.0.1/bin/kafka-server-start.sh /home/kafaka-zookeeper/kafka_2.12-3.0.1/config/server.properties &>> /home/kafaka-zookeeper/kafka_2.12-3.0.1/logs/kafka-server.log &

    4. Verify the Kafka cluster

    cd /home/kafaka-zookeeper/kafka_2.12-3.0.1/bin
    # create the topic hkzs
    ./kafka-topics.sh --create --bootstrap-server 192.168.19.130:9092,192.168.19.131:9092,192.168.19.132:9092 --replication-factor 3 --partitions 3 --topic hkzs
    # list all topics
    ./kafka-topics.sh --list --bootstrap-server 192.168.19.130:9092

    # publish messages on worker01 192.168.19.130
    ./kafka-console-producer.sh --bootstrap-server 192.168.19.130:9092 --topic hkzs
    >I'm doing fine
    >zoo
    # consume on worker02 192.168.19.131
    ./kafka-console-consumer.sh --bootstrap-server 192.168.19.131:9092 --topic hkzs --from-beginning
    I'm doing fine
    zoo
    # consume on worker03 192.168.19.132
    ./kafka-console-consumer.sh --bootstrap-server 192.168.19.132:9092 --topic hkzs --from-beginning
    I'm doing fine
    zoo

    5. Verify the ZooKeeper cluster

    cd /home/kafaka-zookeeper/apache-zookeeper-3.6.3/bin
    # open the zookeeper client; with a non-default port you must pass -server IP:port,
    # otherwise it connects to port 2181 by default
    ./zkCli.sh -server 192.168.19.131:2181
    # inspect the registered brokers and topics
    WatchedEvent state:SyncConnected type:None path:null
    [zk: 192.168.19.131:2181(CONNECTED) 0] ls /brokers/ids
    [1, 2, 3]
    [zk: 192.168.19.131:2181(CONNECTED) 1] ls /brokers/topics
    [__consumer_offsets, hkzs]
    # open the zookeeper client on the default port
    ./zkCli.sh
    [zk: localhost:2181(CONNECTED) 1] ls /
    [admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
    [zk: localhost:2181(CONNECTED) 2] ls /config
    [brokers, changes, clients, ips, topics, users]
    [zk: localhost:2181(CONNECTED) 3] ls /config/brokers
    []
    [zk: localhost:2181(CONNECTED) 4] ls /config/topics
    [__consumer_offsets, hkzs, zbqy]

  • Original article: https://blog.csdn.net/zhiboqingyun/article/details/126573407