• Building an ELK cluster on Kubernetes with NFS-backed dynamic PV provisioning


    Building an ELK cluster on Kubernetes, using NFS storage for dynamic PV provisioning

    1. Basic cluster architecture

    IP          Hostname    Notes
    11.0.1.3    master1
    11.0.1.4    master2
    11.0.1.5    master3
    11.0.1.6    node1
    11.0.1.7    node2
    11.0.1.8    nfs

    2. Deploying Kubernetes 1.19.20

    Either a one-click sealos install or a binary install will do.

    3. Setting up the NFS service

    Create the NFS share

    Install nfs-utils and rpcbind

    Install the nfs-utils package on both the NFS clients and the server:

    yum install nfs-utils rpcbind
    

    Create the shared directory:

    mkdir -p /nfsdata
    chmod 777 /nfsdata
    

    Edit /etc/exports and add the following:

    vi /etc/exports
    
    /nfsdata *(rw,sync,no_root_squash)
    

    NFS export options

    • ro: read-only
    • rw: read-write
    • sync: write data synchronously, guaranteeing no data loss
    • async: acknowledge writes before data reaches persistent storage; a server restart may lose or corrupt files
    • root_squash: map requests from root (uid/gid 0) to the anonymous user (anonymous uid/gid)
    • no_root_squash: disable the root_squash rule
    • all_squash: map all users to the anonymous user
    • no_all_squash: disable the all_squash rule (the default)
    • anonuid: the uid to map anonymous users to, e.g. anonuid=150
    • anongid: the gid to map anonymous users to, e.g. anongid=100

    Start the services:

    systemctl enable rpcbind.service --now
    systemctl enable nfs.service --now
    

    Always start rpcbind before nfs, otherwise you may run into errors.
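
    To confirm the export took effect, you can re-export and query the share (a quick sanity check; 11.0.1.8 is the NFS host from the table above):

    # re-export everything in /etc/exports and print what is exported
    exportfs -arv
    # list the exports the server advertises to clients
    showmount -e 11.0.1.8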

    4. Implementing dynamic PV provisioning

    Create a StorageClass

    Because a StorageClass provisions storage automatically, we first need to install the storage driver's provisioner, and that provisioner must have enough permissions to access our Kubernetes cluster (much like the dashboard: it can only manage things if it can reach the relevant APIs).

    Create the RBAC (Role-Based Access Control) objects:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      namespace: elk
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
        ### Note: if your namespace is default, the following rule can be omitted ###
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: elk
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
        # replace with namespace where provisioner is deployed
      namespace: elk
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: elk
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
    
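    The manifests above live in the elk namespace, which must exist before they are applied (a minimal sketch; the filename nfs-rbac.yaml is assumed):

    kubectl create namespace elk
    kubectl apply -f nfs-rbac.yaml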

    Create the StorageClass:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: master-nfs-storage
    provisioner: master-nfs-storage # must match the PROVISIONER_NAME environment variable in the provisioner Deployment below
    parameters: 
      archiveOnDelete: "false"
    
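    After applying it, the class should be listed (a quick check; sc.yaml is an assumed filename):

    kubectl apply -f sc.yaml
    kubectl get storageclass master-nfs-storage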

    Create the provisioner (the NFS client):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      namespace: elk
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-client-provisioner
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: master-nfs-storage
                - name: NFS_SERVER
                  value: 11.0.1.8
                - name: NFS_PATH
                  value: /nfsdata
          volumes:
            - name: nfs-client-root
              nfs:
                server: 11.0.1.8
                path: /nfsdata
    
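    Before going further it is worth confirming the provisioner pod is Running (a quick check):

    kubectl -n elk get pods -l app=nfs-client-provisioner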

    Create a test pod to check that the deployment works.

    Create a PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-service-pvc
      annotations:
        volume.beta.kubernetes.io/storage-provisioner: master-nfs-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: master-nfs-storage
    

    accessModes explained:

    ReadWriteOnce -- the volume can be mounted read-write by a single node
    ReadOnlyMany -- the volume can be mounted read-only by many nodes
    ReadWriteMany -- the volume can be mounted read-write by many nodes
    

    Check that the PVC status is Bound:

    kubectl get pvc
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
    test-service-pvc   Bound    pvc-aae2b7fa-377b-11ea-87ad-525400512eca   1Gi        RWX            master-nfs-storage   2m48s
    

    Create a test pod and check that the volume mounts correctly:

    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod
      namespace: elk
    spec:
      containers:
      - name: test-pod
        image: busybox:1.24
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/SUCCESS && exit 0 || exit 1"   #创建一个SUCCESS文件后退出
        volumeMounts:
          - name: nfs-pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: test-service-pvc
    

    At this point a new directory named default-test-service-pvc-pvc-aae2b7fa-377b-11ea-87ad-525400512eca should have appeared under /nfsdata on the NFS server.
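
    On the NFS host you can verify both the directory and the SUCCESS file created by the test pod (a quick check):

    ls /nfsdata/
    ls /nfsdata/default-test-service-pvc-*/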

    How the StorageClass reclaim settings affect data

    1. First configuration

       archiveOnDelete: "false"  
       reclaimPolicy: Delete   #默认没有配置,默认值为Delete
    

    Test results:

    1. After the pod is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    2. After the SC is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is removed
    

    2. Second configuration

       archiveOnDelete: "false"  
       reclaimPolicy: Retain  
    

    Test results:

    1. After the pod is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    2. After the SC is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is kept
    4. After the SC is rebuilt, a new PVC binds to a new PV; the old data can be copied into the new PV
    

    3. Third configuration

       archiveOnDelete: "true"
       reclaimPolicy: Retain  
    

    Results:

    1. After the pod is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    2. After the SC is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is kept
    4. After the SC is rebuilt, a new PVC binds to a new PV; the old data can be copied into the new PV
    

    4. Fourth configuration

      archiveOnDelete: "true"
      reclaimPolicy: Delete  
    

    Results:

    1. After the pod is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    2. After the SC is deleted and recreated, the data survives; the old pod's name and data remain available to the new pod
    3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is kept
    4. After the SC is rebuilt, a new PVC binds to a new PV; the old data can be copied into the new PV
    

    5. Deploying the ES cluster
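
    The StatefulSets below mount a secret named elastic-certificates, which must exist before they are applied. One way to generate the certificate is a sketch like the following (not necessarily the author's exact procedure; it runs elasticsearch-certutil from the same image and assumes an empty keystore password and a writable current directory):

    # generate elastic-certificates.p12 in the current directory
    docker run --rm -v $PWD:/certs elasticsearch:7.16.2 \
      bin/elasticsearch-certutil cert --pass "" --out /certs/elastic-certificates.p12
    # store it as the secret the manifests reference
    kubectl -n elk create secret generic elastic-certificates \
      --from-file=elastic-certificates.p12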

    Create the YAML for the master nodes:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      namespace: elk
      name: elasticsearch-master
      labels:
        app: elasticsearch
        role: master
    spec:
      serviceName: elasticsearch-master
      replicas: 3
      selector:
        matchLabels:
          app: elasticsearch
          role: master
      template:
        metadata:
          labels:
            app: elasticsearch
            role: master
        spec:
          containers:
            - name: elasticsearch
              image: elasticsearch:7.16.2
              command: ["bash", "-c", "ulimit -l unlimited && sysctl -w vm.max_map_count=262144 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
              ports:
                - containerPort: 9200
                  name: http
                - containerPort: 9300
                  name: transport
              env:
                - name: discovery.seed_hosts
                  value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master,elasticsearch-data-0.elasticsearch-data,elasticsearch-data-1.elasticsearch-data,elasticsearch-data-2.elasticsearch-data,elasticsearch-data-3.elasticsearch-data,elasticsearch-data-4.elasticsearch-data,elasticsearch-client-0.elasticsearch-client,elasticsearch-client-1.elasticsearch-client,elasticsearch-client-2.elasticsearch-client"
                - name: cluster.initial_master_nodes
                  value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
                - name: ES_JAVA_OPTS
                  value: -Xms512m -Xmx512m
                - name: node.master
                  value: "true"
                - name: node.ingest
                  value: "false"
                - name: node.data
                  value: "false"
                - name: cluster.name
                  value: "elasticsearch"
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: xpack.security.enabled
                  value: "true"
                - name: xpack.security.transport.ssl.enabled
                  value: "true"
                - name: xpack.monitoring.collection.enabled
                  value: "true"
                - name: xpack.security.transport.ssl.verification_mode
                  value: "certificate"
                - name: xpack.security.transport.ssl.keystore.path
                  value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
                - name: xpack.security.transport.ssl.truststore.path
                  value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
    
              volumeMounts:
               - mountPath: /usr/share/elasticsearch/data
                 name: pv-storage-elastic-master
               - name: elastic-certificates
                 readOnly: true
                 mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
                 subPath: elastic-certificates.p12
               - mountPath: /etc/localtime
                 name: localtime
              securityContext:
                privileged: true
          volumes:
          - name: elastic-certificates
            secret:
              secretName: elastic-certificates
          - hostPath:
              path: /etc/localtime
            name: localtime
    
      volumeClaimTemplates:
      - metadata:
          name: pv-storage-elastic-master
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "master-nfs-storage"
          resources:
            requests:
              storage: 1Gi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elk
      name: elasticsearch-master
      labels:
        app: elasticsearch
        role: master
    spec:
      selector:
        app: elasticsearch
        role: master
      type: NodePort
      ports:
      - port: 9200
        nodePort: 30001
        targetPort: 9200
    
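    Once applied, the three master pods should come up one by one (a quick check):

    kubectl -n elk rollout status statefulset/elasticsearch-master
    kubectl -n elk get pods -l role=master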

    Create the YAML for the data nodes:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      namespace: elk
      name: elasticsearch-data
      labels:
        app: elasticsearch
        role: data
    spec:
      serviceName: elasticsearch-data
      replicas: 5
      selector:
        matchLabels:
          app: elasticsearch
          role: data
      template:
        metadata:
          labels:
            app: elasticsearch
            role: data
        spec:
          containers:
            - name: elasticsearch
              image: elasticsearch:7.16.2
              command: ["bash", "-c", "ulimit -l unlimited && sysctl -w vm.max_map_count=262144 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
              ports:
                - containerPort: 9200
                  name: http
                - containerPort: 9300
                  name: transport
              env:
                - name: discovery.seed_hosts
                  value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master,elasticsearch-data-0.elasticsearch-data,elasticsearch-data-1.elasticsearch-data,elasticsearch-data-2.elasticsearch-data,elasticsearch-data-3.elasticsearch-data,elasticsearch-data-4.elasticsearch-data,elasticsearch-client-0.elasticsearch-client,elasticsearch-client-1.elasticsearch-client,elasticsearch-client-2.elasticsearch-client"
                - name: cluster.initial_master_nodes
                  value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
                - name: ES_JAVA_OPTS
                  value: -Xms512m -Xmx512m
                - name: node.master
                  value: "false"
                - name: node.ingest
                  value: "false"
                - name: node.data
                  value: "true"
                - name: cluster.name
                  value: "elasticsearch"
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: xpack.security.enabled
                  value: "true"
                - name: xpack.security.transport.ssl.enabled
                  value: "true"
                - name: xpack.monitoring.collection.enabled
                  value: "true"
                - name: xpack.security.transport.ssl.verification_mode
                  value: "certificate"
                - name: xpack.security.transport.ssl.keystore.path
                  value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
                - name: xpack.security.transport.ssl.truststore.path
                  value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
    
              volumeMounts:
               - mountPath: /usr/share/elasticsearch/data
                 name: pv-storage-elastic-data
               - name: elastic-certificates
                 readOnly: true
                 mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
                 subPath: elastic-certificates.p12
               - mountPath: /etc/localtime
                 name: localtime
    
              securityContext:
                privileged: true
    
          volumes:
          - name: elastic-certificates
            secret:
              secretName: elastic-certificates
          - hostPath:
              path: /etc/localtime
            name: localtime
    
      volumeClaimTemplates:
      - metadata:
          name: pv-storage-elastic-data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "master-nfs-storage"
          resources:
            requests:
              storage: 2Gi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elk
      name: elasticsearch-data
      labels:
        app: elasticsearch
        role: data
    spec:
      selector:
        app: elasticsearch
        role: data
      type: NodePort
      ports:
      - port: 9200
        nodePort: 30002
        targetPort: 9200
    

    Create the YAML for the client nodes:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      namespace: elk
      name: elasticsearch-client
      labels:
        app: elasticsearch
        role: client
    spec:
      serviceName: elasticsearch-client
      replicas: 3
      selector:
        matchLabels:
          app: elasticsearch
          role: client
      template:
        metadata:
          labels:
            app: elasticsearch
            role: client
        spec:
          containers:
            - name: elasticsearch
              image: elasticsearch:7.16.2
              command: ["bash", "-c", "ulimit -l unlimited && sysctl -w vm.max_map_count=262144 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && exec su elasticsearch docker-entrypoint.sh"]
              ports:
                - containerPort: 9200
                  name: http
                - containerPort: 9300
                  name: transport
              env:
                - name: discovery.seed_hosts
                  value: "elasticsearch-master-0.elasticsearch-master,elasticsearch-master-1.elasticsearch-master,elasticsearch-master-2.elasticsearch-master,elasticsearch-data-0.elasticsearch-data,elasticsearch-data-1.elasticsearch-data,elasticsearch-data-2.elasticsearch-data,elasticsearch-data-3.elasticsearch-data,elasticsearch-data-4.elasticsearch-data,elasticsearch-client-0.elasticsearch-client,elasticsearch-client-1.elasticsearch-client,elasticsearch-client-2.elasticsearch-client"
                - name: cluster.initial_master_nodes
                  value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
                - name: ES_JAVA_OPTS
                  value: -Xms512m -Xmx512m
                - name: node.master
                  value: "false"
                - name: node.ingest
                  value: "true"
                - name: node.data
                  value: "false"
                - name: cluster.name
                  value: "elasticsearch"
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: xpack.security.enabled
                  value: "true"
                - name: xpack.security.transport.ssl.enabled
                  value: "true"
                - name: xpack.monitoring.collection.enabled
                  value: "true"
                - name: xpack.security.transport.ssl.verification_mode
                  value: "certificate"
                - name: xpack.security.transport.ssl.keystore.path
                  value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
                - name: xpack.security.transport.ssl.truststore.path
                  value: "/usr/share/elasticsearch/config/elastic-certificates.p12"
    
              volumeMounts:
               - mountPath: /usr/share/elasticsearch/data
                 name: pv-storage-elastic-client
               - name: elastic-certificates
                 readOnly: true
                 mountPath: "/usr/share/elasticsearch/config/elastic-certificates.p12"
                 subPath: elastic-certificates.p12
               - mountPath: /etc/localtime
                 name: localtime
    
              securityContext:
                privileged: true
    
          volumes:
          - name: elastic-certificates
            secret:
              secretName: elastic-certificates
          - hostPath:
              path: /etc/localtime
            name: localtime
    
      volumeClaimTemplates:
      - metadata:
          name: pv-storage-elastic-client
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "master-nfs-storage"
          resources:
            requests:
              storage: 1Gi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: elk
      name: elasticsearch-client
      labels:
        app: elasticsearch
        role: client
    spec:
      selector:
        app: elasticsearch
        role: client
      type: NodePort
      ports:
      - port: 9200
        nodePort: 30003
        targetPort: 9200
    

    Set the passwords for the ES cluster. Keep them somewhere safe!

    kubectl -n elk exec -it $(kubectl -n elk get pods | grep elasticsearch-master | sed -n 1p | awk '{print $1}') -- bin/elasticsearch-setup-passwords auto -b
    
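    With the passwords in hand, cluster health can be checked through the client service's NodePort (a sketch; 11.0.1.6 is any node IP from section 1, and <password> stands for the generated elastic password):

    curl -u elastic:<password> "http://11.0.1.6:30003/_cluster/health?pretty"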

    6. Deploying Kibana

    Create a secret using the password generated above for the elastic user (03sWFWzGOjNOCioqcbV3 in this run):

    kubectl -n elk create secret generic elasticsearch-password --from-literal password=03sWFWzGOjNOCioqcbV3
    

    Create the Kibana YAML:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: elk
      name: kibana-config
      labels:
        app: kibana
    data:
      kibana.yml: |-
        server.host: 0.0.0.0
        elasticsearch:
          hosts: ${ELASTICSEARCH_HOSTS}
          username: ${ELASTICSEARCH_USER}
          password: ${ELASTICSEARCH_PASSWORD}
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        app: kibana
      name: kibana
      namespace: elk
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: kibana
      template:
        metadata:
          labels:
            app: kibana
        spec:
          nodeSelector:
            node: node2
          containers:
            - name: kibana
              image: kibana:7.16.2
              ports:
                - containerPort: 5601
                  protocol: TCP
              env:
                - name: SERVER_PUBLICBASEURL
                  value: "http://0.0.0.0:5601"
                - name: I18N.LOCALE
                  value: zh-CN
                - name: ELASTICSEARCH_HOSTS
                  value: "http://elasticsearch-client:9200"
                - name: ELASTICSEARCH_USER
                  value: "elastic"
                - name: ELASTICSEARCH_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: elasticsearch-password
                      key: password
                - name: xpack.encryptedSavedObjects.encryptionKey
                  value: "min-32-byte-long-strong-encryption-key"
    
              volumeMounts:
              - name: kibana-config
                mountPath: /usr/share/kibana/config/kibana.yml
                readOnly: true
                subPath: kibana.yml
              - mountPath: /etc/localtime
                name: localtime
          volumes:
          - name: kibana-config
            configMap:
              name: kibana-config
          - hostPath:
              path: /etc/localtime
            name: localtime
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        app: kibana
      name: kibana-service
      namespace: elk
    spec:
      ports:
      - port: 5601
        targetPort: 5601
        nodePort: 30004
      type: NodePort
      selector:
        app: kibana
    
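    The Deployment above schedules Kibana onto a node labeled node=node2, so that label must be present; Kibana then answers on NodePort 30004 (a sketch; the label command assumes the label has not been set yet):

    kubectl label node node2 node=node2
    # then open http://<any-node-ip>:30004 and log in as elastic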

    7. Deploying the Kafka cluster

    Deploy the ZooKeeper cluster:

    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper
      namespace: elk
      labels:
        app: zookeeper
    spec:
      type: NodePort
      ports:
      - port: 2181
        nodePort: 30005
        targetPort: 2181
      selector:
        app: zookeeper
    ---
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: zookeeper-pdb
      namespace: elk
    spec:
      selector:
        matchLabels:
          app: zookeeper
      minAvailable: 2
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: zookeeper
      namespace: elk
    spec:
      selector:
        matchLabels:
          app: zookeeper
      serviceName: zookeeper
      replicas: 3
      updateStrategy:
        type: RollingUpdate
      podManagementPolicy: Parallel
      template:
        metadata:
          labels:
            app: zookeeper
        spec:
          containers:
          - name: kubernetes-zookeeper
            imagePullPolicy: IfNotPresent
            image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
            ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
            command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
            readinessProbe:
              exec:
                command:
                - sh
                - -c
                - "zookeeper-ready 2181"
              initialDelaySeconds: 10
              timeoutSeconds: 5
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - "zookeeper-ready 2181"
              initialDelaySeconds: 10
              timeoutSeconds: 5
            volumeMounts:
            - name: zookeeper
              mountPath: /var/lib/zookeeper
          securityContext:
            runAsUser: 1000
            fsGroup: 1000
      volumeClaimTemplates:
      - metadata:
          name: zookeeper
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "master-nfs-storage"
          resources:
            requests:
              storage: 1Gi
    
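    Once the three ZooKeeper pods are Running, their server ids can be checked (a quick sanity check; the myid path follows the --data_dir setting above):

    for i in 0 1 2; do
      kubectl -n elk exec zookeeper-$i -- cat /var/lib/zookeeper/data/myid
    done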

    Create the Kafka YAML:

    [root@master1 elk]# cat 07-kafka.yaml 
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kafka
      namespace: elk
      labels:
        app: kafka
    spec:
      type: NodePort
      ports:
      - port: 9092
        nodePort: 30006
        targetPort: 9092
      selector:
        app: kafka
    ---
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: kafka-pdb
      namespace: elk
    spec:
      selector:
        matchLabels:
          app: kafka
      maxUnavailable: 1
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: kafka
      namespace: elk
    spec:
      selector:
         matchLabels:
            app: kafka
      serviceName: kafka
      replicas: 3
      template:
        metadata:
          labels:
            app: kafka
        spec:
          terminationGracePeriodSeconds: 300
          containers:
          - name: k8s-kafka
            imagePullPolicy: IfNotPresent
            image: fastop/kafka:2.2.0
            resources:
              requests:
                memory: "600Mi"
                cpu: 500m
            ports:
            - containerPort: 9092
              name: server
            command:
            - sh
            - -c
            - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
              --override listeners=PLAINTEXT://:9092 \
              --override zookeeper.connect=zookeeper.elk.svc.cluster.local:2181 \
              --override log.dir=/var/lib/kafka \
              --override auto.create.topics.enable=true \
              --override auto.leader.rebalance.enable=true \
              --override background.threads=10 \
              --override compression.type=producer \
              --override delete.topic.enable=false \
              --override leader.imbalance.check.interval.seconds=300 \
              --override leader.imbalance.per.broker.percentage=10 \
              --override log.flush.interval.messages=9223372036854775807 \
              --override log.flush.offset.checkpoint.interval.ms=60000 \
              --override log.flush.scheduler.interval.ms=9223372036854775807 \
              --override log.retention.bytes=-1 \
              --override log.retention.hours=168 \
              --override log.roll.hours=168 \
              --override log.roll.jitter.hours=0 \
              --override log.segment.bytes=1073741824 \
              --override log.segment.delete.delay.ms=60000 \
              --override message.max.bytes=1000012 \
              --override min.insync.replicas=1 \
              --override num.io.threads=8 \
              --override num.network.threads=3 \
              --override num.recovery.threads.per.data.dir=1 \
              --override num.replica.fetchers=1 \
              --override offset.metadata.max.bytes=4096 \
              --override offsets.commit.required.acks=-1 \
              --override offsets.commit.timeout.ms=5000 \
              --override offsets.load.buffer.size=5242880 \
              --override offsets.retention.check.interval.ms=600000 \
              --override offsets.retention.minutes=1440 \
              --override offsets.topic.compression.codec=0 \
              --override offsets.topic.num.partitions=50 \
              --override offsets.topic.replication.factor=3 \
              --override offsets.topic.segment.bytes=104857600 \
              --override queued.max.requests=500 \
              --override quota.consumer.default=9223372036854775807 \
              --override quota.producer.default=9223372036854775807 \
              --override replica.fetch.min.bytes=1 \
              --override replica.fetch.wait.max.ms=500 \
              --override replica.high.watermark.checkpoint.interval.ms=5000 \
              --override replica.lag.time.max.ms=10000 \
              --override replica.socket.receive.buffer.bytes=65536 \
              --override replica.socket.timeout.ms=30000 \
              --override request.timeout.ms=30000 \
              --override socket.receive.buffer.bytes=102400 \
              --override socket.request.max.bytes=104857600 \
              --override socket.send.buffer.bytes=102400 \
              --override unclean.leader.election.enable=true \
              --override zookeeper.session.timeout.ms=6000 \
              --override zookeeper.set.acl=false \
              --override broker.id.generation.enable=true \
              --override connections.max.idle.ms=600000 \
              --override controlled.shutdown.enable=true \
              --override controlled.shutdown.max.retries=3 \
              --override controlled.shutdown.retry.backoff.ms=5000 \
              --override controller.socket.timeout.ms=30000 \
              --override default.replication.factor=1 \
              --override fetch.purgatory.purge.interval.requests=1000 \
              --override group.max.session.timeout.ms=300000 \
              --override group.min.session.timeout.ms=6000 \
              --override inter.broker.protocol.version=2.2.0 \
              --override log.cleaner.backoff.ms=15000 \
              --override log.cleaner.dedupe.buffer.size=134217728 \
              --override log.cleaner.delete.retention.ms=86400000 \
              --override log.cleaner.enable=true \
              --override log.cleaner.io.buffer.load.factor=0.9 \
              --override log.cleaner.io.buffer.size=524288 \
              --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
              --override log.cleaner.min.cleanable.ratio=0.5 \
              --override log.cleaner.min.compaction.lag.ms=0 \
              --override log.cleaner.threads=1 \
              --override log.cleanup.policy=delete \
              --override log.index.interval.bytes=4096 \
              --override log.index.size.max.bytes=10485760 \
              --override log.message.timestamp.difference.max.ms=9223372036854775807 \
              --override log.message.timestamp.type=CreateTime \
              --override log.preallocate=false \
              --override log.retention.check.interval.ms=300000 \
              --override max.connections.per.ip=2147483647 \
              --override num.partitions=4 \
              --override producer.purgatory.purge.interval.requests=1000 \
              --override replica.fetch.backoff.ms=1000 \
              --override replica.fetch.max.bytes=1048576 \
              --override replica.fetch.response.max.bytes=10485760 \
              --override reserved.broker.max.id=1000 "
            env:
            - name: KAFKA_HEAP_OPTS
              value : "-Xmx512M -Xms512M"
            - name: KAFKA_OPTS
              value: "-Dlogging.level=INFO"
            volumeMounts:
            - name: kafka
              mountPath: /var/lib/kafka
            readinessProbe:
              tcpSocket:
                port: 9092
              timeoutSeconds: 1
              initialDelaySeconds: 5
          securityContext:
            runAsUser: 1000
            fsGroup: 1000
      volumeClaimTemplates:
      - metadata:
          name: kafka
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "master-nfs-storage"
          resources:
            requests:
              storage: 1Gi
    

    Inspect the brokers through ZooKeeper:

    [root@master1 elk]# kubectl exec -it zookeeper-1 -n elk bash
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
    zookeeper@zookeeper-1:/$  zkCli.sh 
    Connecting to localhost:2181
    
    [zk: localhost:2181(CONNECTED) 0] get /brokers/ids/0
    {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-0.kafka.elk.svc.cluster.local:9093"],"jmx_port":-1,"host":"kafka-0.kafka.elk.svc.cluster.local","timestamp":"1641887271398","port":9093,"version":4}
    cZxid = 0x200000024
    ctime = Tue Jan 11 07:47:51 UTC 2022
    mZxid = 0x200000024
    mtime = Tue Jan 11 07:47:51 UTC 2022
    pZxid = 0x200000024
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x27e480b276e0001
    dataLength = 246
    numChildren = 0
    [zk: localhost:2181(CONNECTED) 1] get /brokers/ids/1
    {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-1.kafka.elk.svc.cluster.local:9093"],"jmx_port":-1,"host":"kafka-1.kafka.elk.svc.cluster.local","timestamp":"1641887242316","port":9093,"version":4}
    cZxid = 0x20000001e
    ctime = Tue Jan 11 07:47:22 UTC 2022
    mZxid = 0x20000001e
    mtime = Tue Jan 11 07:47:22 UTC 2022
    pZxid = 0x20000001e
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x27e480b276e0000
    dataLength = 246
    numChildren = 0
    [zk: localhost:2181(CONNECTED) 2] get /brokers/ids/2
    {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-2.kafka.elk.svc.cluster.local:9093"],"jmx_port":-1,"host":"kafka-2.kafka..svc.cluster.local","timestamp":"1641888604437","port":9093,"version":4}
    cZxid = 0x20000002d
    ctime = Tue Jan 11 08:10:04 UTC 2022
    mZxid = 0x20000002d
    mtime = Tue Jan 11 08:10:04 UTC 2022
    pZxid = 0x20000002d
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x27e480b276e0002
    dataLength = 246
    numChildren = 0
    
    

    Kafka produce/consume test

    Create a topic:

    [root@master1 elk]# kubectl exec -it kafka-0 -n elk sh
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
    $ pwd
    /
    $ cd /opt/kafka/bin
    $ ./kafka-topics.sh --create --topic test --zookeeper zookeeper.elk.svc.cluster.local:2181 --partitions 3 --replication-factor 3    
    Created topic "test".
    $ ./kafka-topics.sh --list --zookeeper zookeeper.elk.svc.cluster.local:2181
    test
    

    Produce messages:

    $ ./kafka-console-producer.sh --topic test --broker-list kafka-0.kafka.elk.svc.cluster.local:9093
    111
    

    In another terminal, consume the messages:

    $ ./kafka-console-consumer.sh --topic test --zookeeper zookeeper.elk.svc.cluster.local:2181 --from-beginning
    Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
    111
    

    Consumption works!

    8. Deploying Logstash

    [root@master1 elk]# cat 09-logstash.yaml 
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: logstash-configmap
      namespace: elk
    data:
      logstash.yml: |
        http.host: "0.0.0.0"
        path.config: /usr/share/logstash/pipeline
      logstash.conf: |
        input {
          kafka {
        bootstrap_servers => "kafka-0.kafka.elk.svc.cluster.local:9092,kafka-1.kafka.elk.svc.cluster.local:9092,kafka-2.kafka.elk.svc.cluster.local:9092"
            topics => ["filebeat"]
            codec => "json"
          }
        }
        filter {
          date {
            match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
          }
        }
          output {
            elasticsearch {
              hosts => ["elasticsearch-client:9200"]
              user => "elastic"
              password => "lGTiRY1ZcChmlNpr5AFX"
              index => "kubernetes-%{+YYYY.MM.dd}"
          }
        }
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: logstash-deployment
      namespace: elk
    spec:
      selector:
        matchLabels:
          app: logstash
      replicas: 1
      template:
        metadata:
          labels:
            app: logstash
        spec:
          nodeSelector:
            node: node2
          containers:
          - name: logstash
            image: docker.elastic.co/logstash/logstash:7.16.2
            ports:
            - containerPort: 5044
            volumeMounts:
              - name: config-volume
                mountPath: /usr/share/logstash/config
              - name: logstash-pipeline-volume
                mountPath: /usr/share/logstash/pipeline
              - mountPath: /etc/localtime
                name: localtime
          volumes:
          - name: config-volume
            configMap:
              name: logstash-configmap
              items:
                - key: logstash.yml
                  path: logstash.yml
          - name: logstash-pipeline-volume
            configMap:
              name: logstash-configmap
              items:
                - key: logstash.conf
                  path: logstash.conf
          - hostPath:
              path: /etc/localtime
            name: localtime
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: logstash-service
      namespace: elk
    spec:
      selector:
        app: logstash
      type: NodePort
      ports:
      - protocol: TCP
        port: 5044
        targetPort: 5044
        nodePort: 30007
    
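    Once Logstash is consuming from Kafka, daily kubernetes-* indices should start appearing in Elasticsearch (a sketch; <password> is the elastic password used above):

    curl -u elastic:<password> "http://11.0.1.6:30003/_cat/indices/kubernetes-*?v"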

    9. Deploying Filebeat

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-config
      namespace: elk
      labels:
        k8s-app: filebeat
    data:
      filebeat.yml: |
        filebeat.inputs:
        - type: container
          paths:
            - '/var/lib/docker/containers/*/*.log'
          processors:
            - add_kubernetes_metadata:
                host: ${NODE_NAME}
                matchers:
                - logs_path:
                    logs_path: "/var/lib/docker/containers/"
        processors:
          - add_cloud_metadata:
          - add_host_metadata:
        output:
          kafka:
            enabled: true
            hosts: ["kafka-0.kafka.elk.svc.cluster.local:9092","kafka-1.kafka.elk.svc.cluster.local:9092","kafka-2.kafka.elk.svc.cluster.local:9092"]
            topic: "filebeat"
            max_message_bytes: 5242880
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: filebeat
      namespace: elk
      labels:
        k8s-app: filebeat
    spec:
      selector:
        matchLabels:
          k8s-app: filebeat
      template:
        metadata:
          labels:
            k8s-app: filebeat
        spec:
          serviceAccountName: filebeat
          terminationGracePeriodSeconds: 30
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          containers:
          - name: filebeat
            image: docker.elastic.co/beats/filebeat:7.16.2
            args: [
              "-c", "/etc/filebeat.yml",
              "-e",
            ]
            env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            securityContext:
              runAsUser: 0
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 100Mi
            volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          volumes:
          - name: config
            configMap:
              defaultMode: 0640
              name: filebeat-config
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
          - name: data
            hostPath:
              path: /var/lib/filebeat-data
              type: DirectoryOrCreate
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: filebeat
    subjects:
    - kind: ServiceAccount
      name: filebeat
      namespace: elk
    roleRef:
      kind: ClusterRole
      name: filebeat
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: filebeat
      namespace: elk
    subjects:
      - kind: ServiceAccount
        name: filebeat
        namespace: elk
    roleRef:
      kind: Role
      name: filebeat
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: filebeat-kubeadm-config
      namespace: elk
    subjects:
      - kind: ServiceAccount
        name: filebeat
        namespace: elk
    roleRef:
      kind: Role
      name: filebeat-kubeadm-config
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: filebeat
      labels:
        k8s-app: filebeat
    rules:
    - apiGroups: [""] # "" indicates the core API group
      resources:
      - namespaces
      - pods
      - nodes
      verbs:
      - get
      - watch
      - list
    - apiGroups: ["apps"]
      resources:
        - replicasets
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: filebeat
      namespace: elk
      labels:
        k8s-app: filebeat
    rules:
      - apiGroups:
          - coordination.k8s.io
        resources:
          - leases
        verbs: ["get", "create", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: filebeat-kubeadm-config
      namespace: elk
      labels:
        k8s-app: filebeat
    rules:
      - apiGroups: [""]
        resources:
          - configmaps
        resourceNames:
          - kubeadm-config
        verbs: ["get"]
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: filebeat
      namespace: elk
      labels:
        k8s-app: filebeat
    
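    After applying, one Filebeat pod should be running per node, shipping container logs into the filebeat Kafka topic (a quick check):

    kubectl -n elk get daemonset filebeat
    kubectl -n elk logs -l k8s-app=filebeat --tail=20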

    Compared with our previous deployment (kubernetes上部署ELK集群_你说咋整就咋整的博客-CSDN博客_k8s部署elk), this one simplifies the PV creation process: with dynamic PV provisioning in place, none of the later steps requires creating a PV by hand.

  • Original post: https://blog.csdn.net/weixin_43334786/article/details/128155767