• [Cloud Native | 60] Deploying a Kafka Cluster in Docker with docker-compose



    1. Environment preparation

    1.1 Install Docker

    1.2 Install Docker Compose

    2. docker-compose.yaml configuration

    3. system-config.properties configuration

    4. Start the services


    1. Environment preparation

    • The IP address of the deployment server

    • Ports 9093, 9094, 9095, 2181, and 8048 (for EFAK) available; see the quick check after this list

    • docker and docker-compose
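
    Before continuing, it helps to confirm those ports are actually free. A minimal check, assuming ss (from iproute2) and GNU grep are available; any output means a port is already taken:

    # List listeners already bound to the ports this deployment needs
    sudo ss -lntp | grep -E ':(2181|9093|9094|9095|8048)\b'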

    1.1 Install Docker

    Uninstall old versions (optional)

    • If an older version of Docker was installed previously, remove it with:

    sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine docker-ce

    Install dependencies

    • Install the yum utilities and related dependencies:

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2

    Set up the Docker repository

    • Add the Docker CE yum repository. For faster downloads, a mirror inside China can be used, for example Alibaba Cloud's:

    sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    • Or use the Tsinghua University mirror:

    sudo yum-config-manager --add-repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

    Install Docker CE

    • Refresh the yum cache and install Docker CE:

    sudo yum makecache fast
    sudo yum install -y docker-ce

    Start Docker

    • Start the Docker service:

    sudo systemctl start docker
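
    • On systemd-based systems such as CentOS 7+, Docker can also be enabled to start automatically on boot:

    sudo systemctl enable docker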

    Verify the Docker installation

    • Verify that Docker was installed successfully by running:

    sudo docker --version
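
    • As a further sanity check, running a disposable container confirms the daemon can pull and run images (requires network access to Docker Hub):

    sudo docker run --rm hello-world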

    1.2 Install Docker Compose

    Download Docker Compose

    • Use curl to download the Docker Compose binary from GitHub into /usr/local/bin/ (check whether a newer version exists):

    sudo curl -L "https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    • Note: the version v2.6.0 in the command above may not be the latest; check the Docker Compose GitHub releases page for the current version.
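
    • Alternative: because the Docker CE repository is already configured, the Compose v2 plugin can instead be installed via yum. This provides the docker compose subcommand rather than the standalone docker-compose binary used in the rest of this guide:

    sudo yum install -y docker-compose-plugin
    docker compose version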

    Grant execute permission

    • Make the Docker Compose binary executable:

    sudo chmod +x /usr/local/bin/docker-compose

    Verify the Docker Compose installation

    • Verify that Docker Compose was installed successfully by running:

    docker-compose --version

    2. docker-compose.yaml configuration

    The file content is as follows:

    version: '2'
    services:
      zookeeper:
        image: wurstmeister/zookeeper
        restart: always
        ports:
          - "2181:2181"
      kafka1:
        image: wurstmeister/kafka
        restart: always
        depends_on:
          - zookeeper
        ports:
          - "9093:9093"
        environment:
          KAFKA_ADVERTISED_HOST_NAME: kafka1
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENERS: PLAINTEXT://:9093
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9093
          KAFKA_BROKER_ID: 1
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      kafka2:
        image: wurstmeister/kafka
        restart: always
        depends_on:
          - zookeeper
        ports:
          - "9094:9094"
        environment:
          KAFKA_ADVERTISED_HOST_NAME: kafka2
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENERS: PLAINTEXT://:9094
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9094
          KAFKA_BROKER_ID: 2
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      kafka3:
        image: wurstmeister/kafka
        restart: always
        depends_on:
          - zookeeper
        ports:
          - "9095:9095"
        environment:
          KAFKA_ADVERTISED_HOST_NAME: kafka3
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENERS: PLAINTEXT://:9095
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9095
          KAFKA_BROKER_ID: 3
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      eagle:
        image: nickzurich/efak:3.0.1
        volumes: # mount the config file into the container
          - ./system-config.properties:/opt/efak/conf/system-config.properties
        environment: # configuration parameters
          EFAK_CLUSTER_ZK_LIST: zookeeper:2181
        depends_on:
          - zookeeper
        ports:
          - "8048:8048"

    Change the IP in KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9093 to your server's IP (there are three occurrences, one per broker).
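
    A quick way to make all three substitutions at once is a sed one-liner; this sketch assumes the file still contains the example address 192.168.202.219 verbatim, and YOUR_SERVER_IP is a placeholder to replace with the real address:

    # Replace the example address in all three KAFKA_ADVERTISED_LISTENERS entries
    sed -i 's/192.168.202.219/YOUR_SERVER_IP/g' docker-compose.yaml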

    3. system-config.properties configuration

    ######################################
    # multi zookeeper & kafka cluster list
    # Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
    ######################################
    efak.zk.cluster.alias=cluster
    cluster.zk.list=zookeeper:2181
    ######################################
    # zookeeper enable acl
    ######################################
    cluster.zk.acl.enable=false
    cluster.zk.acl.schema=digest
    cluster.zk.acl.username=test
    cluster.zk.acl.password=test123
    ######################################
    # kraft broker
    ######################################
    efak.kafka.cluster.alias=cluster
    ######################################
    # broker size online list
    ######################################
    cluster.efak.broker.size=1
    ######################################
    # zk client thread limit
    # Zookeeper cluster allows the number of clients to connect to
    ######################################
    kafka.zk.limit.size=25
    ######################################
    # EFAK webui port
    ######################################
    efak.webui.port=8048
    ######################################
    # kafka jmx acl and ssl authenticate
    ######################################
    cluster.efak.jmx.acl=false
    cluster.efak.jmx.user=keadmin
    cluster.efak.jmx.password=keadmin123
    cluster.efak.jmx.ssl=false
    cluster.efak.jmx.truststore.location=/Users/dengjie/workspace/ssl/certificates/kafka.truststore
    cluster.efak.jmx.truststore.password=ke123456
    ######################################
    # kafka offset storage
    ######################################
    cluster.efak.offset.storage=kafka
    # If offset is out of range occurs, enable this property -- Only suitable for kafka sql
    efak.sql.fix.error=false
    ######################################
    # kafka jmx uri
    ######################################
    cluster.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi
    ######################################
    # kafka metrics, 15 days by default
    ######################################
    # Whether the Kafka performance monitoring diagram is enabled
    efak.metrics.charts=false
    # Kafka Eagle keeps data for 30 days by default
    efak.metrics.retain=30
    ######################################
    # kafka sql topic records max
    ######################################
    efak.sql.topic.records.max=5000
    efak.sql.topic.preview.records.max=10
    efak.sql.worknode.port=8787
    efak.sql.distributed.enable=FALSE
    efak.sql.worknode.rpc.timeout=300000
    efak.sql.worknode.fetch.threshold=5000
    efak.sql.worknode.fetch.timeout=20000
    efak.sql.worknode.server.path=/Users/dengjie/workspace/kafka-eagle-plus/kafka-eagle-common/src/main/resources/works
    ######################################
    # delete kafka topic token
    # Set to delete the topic token, so that administrators can have the right to delete
    ######################################
    efak.topic.token=keadmin
    ######################################
    # kafka sasl authenticate
    ######################################
    cluster.efak.sasl.enable=false
    cluster.efak.sasl.protocol=SASL_PLAINTEXT
    cluster.efak.sasl.mechanism=SCRAM-SHA-256
    cluster.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
    # If not set, the value can be empty
    cluster.efak.sasl.client.id=
    # Add kafka cluster cgroups
    cluster.efak.sasl.cgroup.enable=false
    cluster.efak.sasl.cgroup.topics=kafka_ads01,kafka_ads02
    ######################################
    # kafka jdbc driver address
    # Default use sqlite to store data
    ######################################
    efak.driver=org.sqlite.JDBC
    # It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
    efak.url=jdbc:sqlite:/hadoop/efak/db/ke.db
    efak.username=root
    efak.password=smartloli

    LICENSE

    MIT License

    Copyright (c) 2023 Salent Olivick

    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all
    copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
    SOFTWARE.

    4. Start the services

    sudo docker-compose up -d

    Once everything is up, open EFAK at http://127.0.0.1:8048/ to view the Kafka cluster status. The username/password is admin/123456.
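
    To verify the deployment from the command line, check the container status and run a quick topic round trip. The sketch below assumes the wurstmeister/kafka image keeps the standard Kafka CLI scripts (kafka-topics.sh and friends) on the PATH and ships a Kafka recent enough for --bootstrap-server (2.2+); smoke-test is an arbitrary topic name:

    # All five containers should show State: Up
    sudo docker-compose ps
    # Create a topic replicated across all three brokers
    sudo docker-compose exec kafka1 kafka-topics.sh --create --topic smoke-test --partitions 3 --replication-factor 3 --bootstrap-server localhost:9093
    # Confirm each partition has three replicas spread over brokers 1, 2 and 3
    sudo docker-compose exec kafka1 kafka-topics.sh --describe --topic smoke-test --bootstrap-server localhost:9093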
