Kafka is an open-source stream-processing platform developed by the Apache Software Foundation and written in Scala and Java. It is a distributed publish-subscribe messaging system that relies on ZooKeeper for coordination, and it can handle user activity streams on websites, web logs, messaging services, and similar data.
| OS | IP | Hostname |
|---|---|---|
| CentOS Linux release 7.5.1804 (Core) | 192.168.169.10 | kafka-broker1 |
| CentOS Linux release 7.5.1804 (Core) | 192.168.169.20 | kafka-broker2 |
| CentOS Linux release 7.6.1810 (Core) | 192.168.169.30 | kafka-broker3 |
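Optionally, the hostname-to-IP mapping from the table above can be added to /etc/hosts on every node so the brokers can reach each other by name; a convenience sketch, not required by the rest of this guide:
cat >> /etc/hosts <<EOF
192.168.169.10 kafka-broker1
192.168.169.20 kafka-broker2
192.168.169.30 kafka-broker3
EOF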
1. Disable the firewall and SELinux
[root@kafka-broker3 ~]# setenforce 0
[root@kafka-broker3 ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@kafka-broker3 ~]# systemctl stop firewalld
[root@kafka-broker3 ~]# systemctl disable firewalld
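If you would rather keep firewalld running than disable it, the ports used by this setup could be opened instead; a minimal sketch with firewall-cmd (9092 is the Kafka listener port and 2181 the ZooKeeper client port configured below):
firewall-cmd --permanent --add-port=9092/tcp   # Kafka broker listener
firewall-cmd --permanent --add-port=2181/tcp   # ZooKeeper client port
firewall-cmd --reload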
2. Download the installation package from the Kafka official website, upload it to every node in the cluster, and extract it (Kafka official website link).
[root@kafka-broker3 home]# tar xf kafka_2.11-1.1.0.tgz
[root@kafka-broker3 home]# mv kafka_2.11-1.1.0 kafka
[root@kafka-broker3 home]# ls
kafka kafka_2.11-1.1.0.tgz
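As an illustration, the package could be fetched once and copied to the remaining nodes like this; a sketch only, assuming the Apache archive URL for this release and that scp access to the other brokers is available:
cd /home
wget https://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz
scp kafka_2.11-1.1.0.tgz root@kafka-broker1:/home/
scp kafka_2.11-1.1.0.tgz root@kafka-broker2:/home/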
3. Edit the main Kafka configuration file. The parameters to change differ on each node in the cluster; refer to the parameter notes below, and the per-node values shown after the notes.
[root@kafka-broker3 home]# cat /home/kafka/config/server.properties |egrep -v "^#|^$"
broker.id=3
listeners=PLAINTEXT://192.168.169.30:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.169.10:2181,192.168.169.20:2181,192.168.169.30:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
# broker.id is the unique identifier of the Kafka broker; it must be different on every node in the cluster
broker.id=3
# Listener address and port; PLAINTEXT is the transport protocol. This is the address consumers and producers connect to, so it varies with their network environment; listening on the local address is sufficient here
listeners=PLAINTEXT://192.168.169.30:9092
# A note on this parameter: advertised.listeners is the externally advertised (public) listener address, and it also depends on the consumer/producer network environment; since everything here is on the internal network, it can stay disabled
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Path for Kafka logs and data; Kafka persists its data to disk here
log.dirs=/home/kafka/logs
# ZooKeeper cluster that Kafka connects to; separate multiple nodes with commas
zookeeper.connect=192.168.169.10:2181,192.168.169.20:2181,192.168.169.30:2181
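For reference, only the node-specific parameters differ on the other two brokers; the values below follow directly from the host table above:
# kafka-broker1 (192.168.169.10)
broker.id=1
listeners=PLAINTEXT://192.168.169.10:9092
# kafka-broker2 (192.168.169.20)
broker.id=2
listeners=PLAINTEXT://192.168.169.20:9092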
4. Create the log/data storage directory on every node in the deployment
[root@kafka-broker3 kafka]# mkdir logs
5. Start the service. Kafka supports horizontal scaling, so adding or removing cluster nodes later does not affect the existing environment.
[root@kafka-broker3 ~]# sh /home/kafka/bin/kafka-server-start.sh -daemon /home/kafka/config/server.properties &
[root@kafka-broker2 ~]# sh /home/kafka/bin/kafka-server-start.sh -daemon /home/kafka/config/server.properties &
[root@kafka-broker1 ~]# sh /home/kafka/bin/kafka-server-start.sh -daemon /home/kafka/config/server.properties &
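After starting, each broker can be checked locally; a quick sketch using standard tools (jps lists the running JVM processes, ss shows the listening sockets):
jps | grep -i kafka      # a "Kafka" process should appear
ss -tnlp | grep 9092     # the broker should be listening on its configured port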
6. Log in to any node of the ZooKeeper cluster and check whether the Kafka brokers have registered with ZooKeeper
[root@kafka-broker3 ~]# sh /home/zookeeper/apache-zookeeper-3.7.1-bin/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 8] ls /brokers/ids
[1, 2, 3]
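All three broker ids are registered, so the cluster is up. As a final smoke test, a replicated topic can be created and messages passed through it; a sketch using the CLI tools shipped with this Kafka release (the topic name test is arbitrary):
/home/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.169.10:2181 --replication-factor 3 --partitions 3 --topic test
/home/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.169.10:2181 --topic test
/home/kafka/bin/kafka-console-producer.sh --broker-list 192.168.169.10:9092 --topic test
/home/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.169.10:9092 --topic test --from-beginning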