• Deploying a Kafka KRaft cluster on Kubernetes with Helm, with SASL authentication disabled and data persistence enabled


    Note: this deployment removes the chart's default SASL authentication settings and enables data persistence.

    1. Add and update the Helm repository

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update bitnami
    

    2. Download and extract the Kafka chart

    helm pull bitnami/kafka --version 29.3.14  # pin the chart version so the archive name below matches
    tar -xf kafka-29.3.14.tgz
    

    3. Edit values.yaml

    Below is my edited version. I removed the redundant comments and default settings, keeping only the entries that were changed:

    image:
      # Image registry; the original was docker.io/bitnami/kafka:3.7.1-debian-12-r4
      registry: registry.cn-hangzhou.aliyuncs.com
      repository: zhaoll/kafka
      tag: 3.7.1-debian-12-r4
      pullPolicy: IfNotPresent
    heapOpts: -Xmx1024m -Xms1024m
    listeners:
      client:
        containerPort: 9092
        name: CLIENT
        protocol: PLAINTEXT  # Originally SASL_PLAINTEXT; changed to PLAINTEXT because I don't need authentication when accessing Kafka
        sslClientAuth: none  # Originally ""; set to none for the same reason
      controller:
        name: CONTROLLER
        containerPort: 9093
        protocol: PLAINTEXT  # Same as above
        sslClientAuth: none  # Same as above
      interbroker:
        containerPort: 9094
        name: INTERNAL
        protocol: PLAINTEXT  # Same as above
        sslClientAuth: none  # Same as above
      external:
        containerPort: 9095
        name: EXTERNAL
        protocol: PLAINTEXT  # Same as above
        sslClientAuth: none  # Same as above
      extraListeners: []
      overrideListeners: ""
      advertisedListeners: ""
      securityProtocolMap: ""
    sasl: {}  # Comment out or delete the whole sasl block and replace it with {}
    controller:
      replicaCount: 3
      controllerOnly: false
      minId: 0
      zookeeperMigrationMode: false
      heapOpts: -Xmx1024m -Xms1024m
      persistence:
        enabled: true  # Set to true to enable persistent storage
        storageClass: "nfs-client"  # Use your own storageClass here; it provisions the PV and PVC automatically
        accessModes:
          - ReadWriteOnce
        size: 1Gi  # Choose a size that suits you
        mountPath: /bitnami/kafka
      logPersistence:
        enabled: true  # Enable log persistence
        storageClass: "nfs-client"  # Use your own storageClass here; it provisions the PV and PVC automatically
        accessModes:
          - ReadWriteOnce
        size: 1Gi  # Choose a size that suits you
        mountPath: /opt/bitnami/kafka/logs
    
    service:
      type: NodePort  # Originally ClusterIP; changed to NodePort
      ports:
        client: 9092
        controller: 9093
        interbroker: 9094
        external: 9095
      nodePorts:
        client: ""
        external: ""
      externalTrafficPolicy: Cluster
      headless:
        controller:
          annotations: {}
          labels: {}
        broker:
          annotations: {}
          labels: {}
    kraft:
      enabled: true  # Enable KRaft so the cluster does not depend on ZooKeeper
      existingClusterIdSecret: ""
      clusterId: ""
      controllerQuorumVoters: ""
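The values above leave `kraft.clusterId` empty, so the chart generates one on first install. If you prefer a stable ID across reinstalls (it must match what is already stored on the persistent volumes), you can pre-fill it. A dependency-free sketch that produces an ID in the expected shape, 22 base64url characters encoding 16 random bytes, the same format `kafka-storage.sh random-uuid` emits:

```shell
# Generate a KRaft cluster ID: 16 random bytes, base64url-encoded
# without padding (22 characters).
CLUSTER_ID=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n')
echo "$CLUSTER_ID"
```

Whatever value you set must stay the same for the life of the data directories, otherwise the brokers will refuse to start against the old storage.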
    
    

    4. Run the installation

    [root@master1 kafka]# helm install kafka -f values.yaml bitnami/kafka
    

    A quick walkthrough of the output:

    [root@master1 kafka]# helm install kafka -f values.yaml bitnami/kafka
    # Installation info: mainly the Kafka version and the chart version
    NAME: kafka
    LAST DEPLOYED: Tue Aug  6 16:30:13 2024
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    CHART NAME: kafka
    CHART VERSION: 29.3.14
    APP VERSION: 3.7.1
    
    ** Please be patient while the chart is being deployed **
    
    # Within the Kubernetes cluster, Kafka can be reached at kafka.default.svc.cluster.local on port 9092
    Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
    
        kafka.default.svc.cluster.local
    
    # Producers can reach each Kafka broker through the following three DNS names
    Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
    
        kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
        kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092
        kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092
    
    # Create a pod to use as a Kafka client
    To create a pod that you can use as a Kafka client run the following commands:
        # Command that creates the pod
        kubectl run kafka-client --restart='Never' --image registry.cn-hangzhou.aliyuncs.com/zhaoll/kafka:3.7.1-debian-12-r4 --namespace default --command -- sleep infinity
        # Open a shell inside the pod
        kubectl exec --tty -i kafka-client --namespace default -- bash
    
        PRODUCER:
        # Run the following inside the pod to start a console producer for sending messages
            kafka-console-producer.sh \
                --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
                --topic test
    
        CONSUMER:
        # Run the following inside the pod to start a console consumer for receiving messages
            kafka-console-consumer.sh \
                --bootstrap-server kafka.default.svc.cluster.local:9092 \
                --topic test \
                --from-beginning
    # Because the default image was substituted, a warning is shown here
    Substituted images detected:
      - registry.cn-hangzhou.aliyuncs.com/zhaoll/kafka:3.7.1-debian-12-r4
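The per-broker addresses in the notes above follow the StatefulSet DNS pattern `<pod>.<headless-service>.<namespace>.svc.cluster.local`. A small sketch that assembles the bootstrap list for any replica count (assuming the release name `kafka` and namespace `default` used in this document):

```shell
# Build the bootstrap-server list for the controller StatefulSet.
RELEASE=kafka
NAMESPACE=default
REPLICAS=3
PORT=9092
BROKERS=""
for i in $(seq 0 $((REPLICAS - 1))); do
  # Append each pod's stable DNS name, comma-separating entries.
  BROKERS="${BROKERS}${BROKERS:+,}${RELEASE}-controller-${i}.${RELEASE}-controller-headless.${NAMESPACE}.svc.cluster.local:${PORT}"
done
echo "$BROKERS"
```

Clients only need one reachable address to bootstrap, but listing all replicas keeps the connection working while any single pod is down.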
    

    Following the notes above, create a test client Pod:

    kubectl run kafka-client --restart='Never' --image registry.cn-hangzhou.aliyuncs.com/zhaoll/kafka:3.7.1-debian-12-r4 --namespace default --command -- sleep infinity
    

    After installation, check the Pods and Services:

    [root@master1 kafka]# kubectl get pod,svc
    NAME                                          READY   STATUS    RESTARTS         AGE
    pod/kafka-client                              1/1     Running   0                3m
    pod/kafka-controller-0                        1/1     Running   0                4m
    pod/kafka-controller-1                        1/1     Running   0                4m
    pod/kafka-controller-2                        1/1     Running   0                4m
    pod/nfs-client-provisioner-6f5897fd65-28qlw   1/1     Running   0                15d
    
    NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    service/kafka                       NodePort    10.108.113.8   <none>        9092:32476/TCP               4m
    service/kafka-controller-headless   ClusterIP   None           <none>        9094/TCP,9092/TCP,9093/TCP   4m
    service/kubernetes                  ClusterIP   10.96.0.1      <none>        443/TCP                      18d
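With `service.type: NodePort`, client port 9092 is also exposed on every node at the randomly assigned high port shown in the `PORT(S)` column (`32476` here), so external clients connect to `<any-node-ip>:<node-port>`. A sketch of pulling the node port out of that column value (on a live cluster you would normally query it with `kubectl get svc kafka -o jsonpath` instead):

```shell
# Parse the node port from a "clusterPort:nodePort/proto" mapping string,
# e.g. the "9092:32476/TCP" entry in the PORT(S) column above.
PORTS="9092:32476/TCP"
NODEPORT="${PORTS#*:}"     # drop the cluster port -> "32476/TCP"
NODEPORT="${NODEPORT%/*}"  # drop the protocol    -> "32476"
echo "$NODEPORT"
```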
    
    

    5. Test with a producer and a consumer

    Producer:

    [root@master1 kafka]# kubectl exec --tty -i kafka-client --namespace default -- bash
    I have no name!@kafka-client:/$ kafka-console-producer.sh \
    >             --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
    >             --topic test
    >I am super awesome
    >
    

    Consumer:

    [root@master1 kafka]# kubectl exec --tty -i kafka-client --namespace default -- bash
    I have no name!@kafka-client:/$ kafka-console-consumer.sh \
    >             --bootstrap-server kafka.default.svc.cluster.local:9092 \
    >             --topic test \
    >             --from-beginning
    I am super awesome
    

    That's it; the deployment is complete.

  • Original article: https://blog.csdn.net/weixin_43334786/article/details/140958903