3. system-config.properties configuration

Required for deployment:
- the deployment server's IP address
- ports 9093, 9094, 9095, and 2181 available
- Docker and Docker Compose
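Before deploying, you can confirm that the required ports are actually free. This is a minimal sketch, assuming the server's shell is bash (it relies on bash's /dev/tcp feature; a failed connect means nothing is listening on that port):

```shell
#!/usr/bin/env bash
# Probe each required port on localhost: a successful connect means the
# port is already bound; "connection refused" means it is free.
for port in 2181 9093 9094 9095; do
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port} is already in use"
  else
    echo "port ${port} is free"
  fi
done
```

The probe runs in a subshell, so the temporary file descriptor is closed automatically when each check finishes.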
Uninstall old versions (optional)
If an older version of Docker was installed previously, remove it with:

```shell
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine docker-ce
```
Install dependencies
Install yum-utils and the related dependencies:

```shell
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```
Set up the Docker repository
Add the official Docker CE yum repository. For faster downloads in mainland China, you can use a domestic mirror such as Alibaba Cloud:

```shell
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
Or use the Tsinghua University mirror:

```shell
sudo yum-config-manager --add-repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
```
Install Docker CE
Refresh the yum cache and install Docker CE:

```shell
sudo yum makecache fast
sudo yum install -y docker-ce
```
Start Docker
Start the Docker service:

```shell
sudo systemctl start docker
```
Verify the Docker installation
Verify that Docker installed successfully by running:

```shell
sudo docker --version
```
Download Docker Compose
Use curl to download the Docker Compose binary from GitHub into /usr/local/bin/ (check whether the version is current):

```shell
sudo curl -L "https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```

Note: the version number v2.6.0 in the command above may not be the latest; check the Docker Compose releases page on GitHub for the current version number.
Grant execute permission
Make the Docker Compose binary executable:

```shell
sudo chmod +x /usr/local/bin/docker-compose
```
Verify the Docker Compose installation
Verify that Docker Compose installed successfully by running:

```shell
docker-compose --version
```
The docker-compose.yml file is as follows:

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9093
      KAFKA_BROKER_ID: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "9094:9094"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9094
      KAFKA_BROKER_ID: 2
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "9095:9095"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://:9095
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9095
      KAFKA_BROKER_ID: 3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  eagle:
    image: nickzurich/efak:3.0.1
    volumes: # mount the EFAK config file
      - ./system-config.properties:/opt/efak/conf/system-config.properties
    environment: # EFAK configuration
      EFAK_CLUSTER_ZK_LIST: zookeeper:2181
    depends_on:
      - zookeeper
    ports:
      - "8048:8048"
```
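In this file each broker advertises a single PLAINTEXT listener on the host IP, so both containers and host clients reach the brokers through that address. A common alternative splits internal and external traffic into separate listeners. This is only a sketch, not part of the file above; the INSIDE/OUTSIDE names, the port 19092, and the extra variables are assumptions based on wurstmeister/kafka's convention of mapping KAFKA_* environment variables to server.properties entries:

```yaml
environment:
  KAFKA_LISTENERS: INSIDE://:19092,OUTSIDE://:9093
  KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka1:19092,OUTSIDE://192.168.202.219:9093
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

With this layout, other containers connect via the Docker network name (kafka1:19092) while host clients keep using the published port.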
In the file, change the IP in KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9093 to your server's IP (it appears in three places, once per broker).
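The replacement can also be scripted. A minimal sketch, assuming docker-compose.yml sits in the current directory; SERVER_IP is a placeholder for your real address:

```shell
# Replace the sample address in all three broker entries.
SERVER_IP=10.0.0.5
sed -i "s/192\.168\.202\.219/${SERVER_IP}/g" docker-compose.yml
# Confirm the change: this should print the three rewritten listener lines.
grep -n "${SERVER_IP}" docker-compose.yml
```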
The system-config.properties file is as follows:

```properties
######################################
# multi zookeeper & kafka cluster list
# Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
######################################
efak.zk.cluster.alias=cluster
cluster.zk.list=zookeeper:2181

######################################
# zookeeper enable acl
######################################
cluster.zk.acl.enable=false
cluster.zk.acl.schema=digest
cluster.zk.acl.username=test
cluster.zk.acl.password=test123

######################################
# kraft broker
######################################
efak.kafka.cluster.alias=cluster

######################################
# broker size online list
######################################
cluster.efak.broker.size=1

######################################
# zk client thread limit
# Zookeeper cluster allows the number of clients to connect to
######################################
kafka.zk.limit.size=25

######################################
# EFAK webui port
######################################
efak.webui.port=8048

######################################
# kafka jmx acl and ssl authenticate
######################################
cluster.efak.jmx.acl=false
cluster.efak.jmx.user=keadmin
cluster.efak.jmx.password=keadmin123
cluster.efak.jmx.ssl=false
cluster.efak.jmx.truststore.location=/Users/dengjie/workspace/ssl/certificates/kafka.truststore
cluster.efak.jmx.truststore.password=ke123456

######################################
# kafka offset storage
######################################
cluster.efak.offset.storage=kafka

# If offset is out of range occurs, enable this property -- Only suitable for kafka sql
efak.sql.fix.error=false

######################################
# kafka jmx uri
######################################
cluster.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi

######################################
# kafka metrics, 15 days by default
######################################

# Whether the Kafka performance monitoring diagram is enabled
efak.metrics.charts=false

# Kafka Eagle keeps data for 30 days by default
efak.metrics.retain=30

######################################
# kafka sql topic records max
######################################
efak.sql.topic.records.max=5000
efak.sql.topic.preview.records.max=10
efak.sql.worknode.port=8787
efak.sql.distributed.enable=FALSE
efak.sql.worknode.rpc.timeout=300000
efak.sql.worknode.fetch.threshold=5000
efak.sql.worknode.fetch.timeout=20000
efak.sql.worknode.server.path=/Users/dengjie/workspace/kafka-eagle-plus/kafka-eagle-common/src/main/resources/works

######################################
# delete kafka topic token
# Set to delete the topic token, so that administrators can have the right to delete
######################################
efak.topic.token=keadmin

######################################
# kafka sasl authenticate
######################################
cluster.efak.sasl.enable=false
cluster.efak.sasl.protocol=SASL_PLAINTEXT
cluster.efak.sasl.mechanism=SCRAM-SHA-256
cluster.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
# If not set, the value can be empty
cluster.efak.sasl.client.id=
# Add kafka cluster cgroups
cluster.efak.sasl.cgroup.enable=false
cluster.efak.sasl.cgroup.topics=kafka_ads01,kafka_ads02

######################################
# kafka jdbc driver address
# Default use sqlite to store data
######################################
efak.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
efak.url=jdbc:sqlite:/hadoop/efak/db/ke.db
efak.username=root
efak.password=smartloli
```
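The header of this file hints at multi-cluster support: efak.zk.cluster.alias takes a comma-separated list of aliases, and each alias prefixes its own zk.list (and related per-cluster keys). A sketch for two clusters; the aliases and host names below are placeholders, not part of the deployment above:

```properties
efak.zk.cluster.alias=cluster1,cluster2
cluster1.zk.list=zk-a:2181
cluster2.zk.list=zk-b:2181
cluster1.efak.broker.size=3
cluster2.efak.broker.size=3
```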
LICENSE

MIT License

Copyright (c) 2023 Salent Olivick

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Finally, start the cluster:

```shell
sudo docker-compose up -d
```

Once the containers are up, open EFAK at http://127.0.0.1:8048/ to view the Kafka cluster status; the username/password is admin/123456.
