【Cloud Native】EF(Filebeat)K Log Collection Platform


    一、Using Elasticsearch + Filebeat + Logstash + Kibana to collect logs from a project-specified directory
    1、Deploy the Elasticsearch service (for data storage)

    [root@master efk-7.10.2]# cat es-statefulset.yaml 
    # RBAC authn and authz
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: elasticsearch-logging
      namespace: logging
      labels:
        k8s-app: elasticsearch-logging
        addonmanager.kubernetes.io/mode: Reconcile
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: elasticsearch-logging
      labels:
        k8s-app: elasticsearch-logging
        addonmanager.kubernetes.io/mode: Reconcile
    rules:
      - apiGroups:
          - ""
        resources:
          - "services"
          - "namespaces"
          - "endpoints"
        verbs:
          - "get"
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: elasticsearch-logging
      labels:
        k8s-app: elasticsearch-logging
        addonmanager.kubernetes.io/mode: Reconcile
    subjects:
      - kind: ServiceAccount
        name: elasticsearch-logging
        namespace: logging
        apiGroup: ""
    roleRef:
      kind: ClusterRole
      name: elasticsearch-logging
      apiGroup: ""
    ---
    # Elasticsearch deployment itself
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: elasticsearch-logging
      namespace: logging
      labels:
        k8s-app: elasticsearch-logging
        version: v7.10.2
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      serviceName: elasticsearch-logging
      replicas: 1
      selector:
        matchLabels:
          k8s-app: elasticsearch-logging
          version: v7.10.2
      template:
        metadata:
          labels:
            k8s-app: elasticsearch-logging
            version: v7.10.2
        spec:
          serviceAccountName: elasticsearch-logging
          containers:
            - image: registry.cn-beijing.aliyuncs.com/dotbalo/elasticsearch:v7.10.2
              name: elasticsearch-logging
              imagePullPolicy: IfNotPresent 
              resources:
                # need more cpu upon initialization, therefore burstable class
                limits:
                  cpu: 1000m
                  memory: 3Gi
                requests:
                  cpu: 100m
                  memory: 3Gi
              ports:
                - containerPort: 9200
                  name: db
                  protocol: TCP
                - containerPort: 9300
                  name: transport
                  protocol: TCP
              livenessProbe:
                tcpSocket:
                  port: transport
                initialDelaySeconds: 30
                timeoutSeconds: 10
              readinessProbe:
                tcpSocket:
                  port: transport
                initialDelaySeconds: 30
                timeoutSeconds: 10
              volumeMounts:
                - name: elasticsearch-logging
                  mountPath: /data
              env:
                - name: "NAMESPACE"
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: "MINIMUM_MASTER_NODES"
                  value: "1"
          volumes:
            - name: elasticsearch-logging
              emptyDir: {}
          # Elasticsearch requires vm.max_map_count to be at least 262144.
          # If your OS already sets up this number to a higher value, feel free
          # to remove this init container.
          initContainers:
            - image: registry.cn-beijing.aliyuncs.com/dotbalo/alpine:3.6
              command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
              name: elasticsearch-logging-init
              securityContext:
                privileged: true
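
    All of the manifests in this walkthrough target the `logging` namespace, which is assumed to exist already. If it does not, a minimal manifest (an assumption, not part of the original files) to create it first:

    ```yaml
    # Hypothetical namespace manifest; every resource below is deployed into "logging".
    apiVersion: v1
    kind: Namespace
    metadata:
      name: logging
    ```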
    

    1.1、Create the es-service for service discovery

    [root@master efk-7.10.2]# cat es-service.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch-logging
      namespace: logging
      labels:
        k8s-app: elasticsearch-logging
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "Elasticsearch"
    spec:
      clusterIP: None
      ports:
        - name: db
          port: 9200
          protocol: TCP
          targetPort: 9200
        - name: transport
          port: 9300
          protocol: TCP
          targetPort: 9300
      publishNotReadyAddresses: true
      selector:
        k8s-app: elasticsearch-logging
      sessionAffinity: None
      type: ClusterIP
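
    Because `clusterIP: None` makes this a headless Service, each StatefulSet pod also gets a stable per-pod DNS name of the form `<pod>.<service>.<namespace>.svc` — the same kind of name the Logstash output later in this article relies on. A quick local sketch of how that name is assembled (no cluster needed):

    ```shell
    # Assemble the per-pod DNS name for pod ordinal 0 of the StatefulSet.
    pod="elasticsearch-logging-0"        # <statefulset-name>-<ordinal>
    svc="elasticsearch-logging"          # headless Service name
    ns="logging"                         # namespace
    echo "${pod}.${svc}.${ns}.svc:9200"  # → elasticsearch-logging-0.elasticsearch-logging.logging.svc:9200
    ```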
    

    1.2、Verify that the Elasticsearch service was created successfully

    [root@master efk-7.10.2]# kubectl get pod,svc -n logging|grep elas
    pod/elasticsearch-logging-0           1/1     Running   0          35h
    service/elasticsearch-logging   ClusterIP   None            <none>        9200/TCP,9300/TCP                              35h
    

    2、Create the Filebeat configuration (storing directly in Elasticsearch)

    [root@master filebeat]# cat filebeat-es-cm.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeatconf
    data:
      filebeat.yml: |-
        filebeat.inputs:
        - input_type: log
          paths:
            - /data/log/*/*.log         # paths to collect logs from
          tail_files: true
          fields:
            pod_name: '${podName}'
            pod_ip: '${podIp}'
            pod_deploy_name: '${podDeployName}'
            pod_namespace: '${podNamespace}'
        output.elasticsearch:     # Kafka or Redis could buffer the data; here it is written straight to Elasticsearch
          hosts: ["10.244.1.100:9200"]    # Elasticsearch IP + port
          index: "app-%{+yyyy.MM.dd}"     # index name pattern
        setup.template.name: "filebeat-sidecar"
        setup.template.pattern: "filebeat-sidecar"
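
    Note that `10.244.1.100` is a pod IP, which changes whenever the Elasticsearch pod is rescheduled. A more robust option (a sketch, assuming the `elasticsearch-logging` Service from step 1.1 exists in the `logging` namespace) is to point Filebeat at the Service DNS name instead:

    ```yaml
    # Alternative output: address Elasticsearch through its Service DNS name
    # rather than a pod IP, so rescheduling does not break log shipping.
    output.elasticsearch:
      hosts: ["elasticsearch-logging.logging.svc:9200"]
      index: "app-%{+yyyy.MM.dd}"
    ```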
    

    2.1、Create the Filebeat configuration (buffered through Kafka)
    Note: when Kafka is used as a buffer, Logstash is required to parse the logs.

    [root@master filebeat]# cat filebeat-kafka-cm.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeatconf
    data:
      filebeat.yml: |-
        filebeat.inputs:
        - input_type: log
          paths:
            - /data/log/*/*.log
          tail_files: true
          fields:
            pod_name: '${podName}'
            pod_ip: '${podIp}'
            pod_deploy_name: '${podDeployName}'
            pod_namespace: '${podNamespace}'
        output.kafka:
          hosts: ["kafka:9092"]   # Kafka address and port
          topic: "filebeat-sidecar"
          codec.json:
            pretty: false
          keep_alive: 30s
    Note: a Kafka cluster outside Kubernetes can also be used; simply fill in its IP + port here.
    

    3、Create the Filebeat sidecar (Elasticsearch-based configuration)

    [root@master filebeat]# cat app-filebeat-es.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
      labels:
        app: app
        env: release
    spec:
      selector:
        matchLabels:
          app: app
      replicas: 1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      # minReadySeconds: 30
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
            - name: filebeat                    # Filebeat sidecar container
              image: docker.elastic.co/beats/filebeat-oss:7.10.2   # Filebeat image
              resources:    # resource limits for Filebeat
                requests:
                  memory: "100Mi"
                  cpu: "10m"
                limits:
                  cpu: "200m"
                  memory: "300Mi"
              imagePullPolicy: IfNotPresent
              env:
                - name: podIp
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: status.podIP
                - name: podName
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.name
                - name: podNamespace
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                - name: podDeployName
                  value: app
                - name: TZ
                  value: "Asia/Shanghai"
              securityContext:
                runAsUser: 0
              volumeMounts:
                - name: local-time
                  mountPath: /etc/localtime
                - name: logpath
                  mountPath: /data/log/app/    # share the application's log directory with Filebeat for collection
                - name: filebeatconf
                  mountPath: /usr/share/filebeat/filebeat.yml 
                  subPath: usr/share/filebeat/filebeat.yml
            - name: app     # application container
              image: 192.168.122.150/library/alpine-time:3.6    # application image
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - name: logpath
                  mountPath: /opt/      # application log path
              env:
                - name: LANG
                  value: C.UTF-8
                - name: LC_ALL
                  value: C.UTF-8
              command:
                - sh
                - -c
                - while true; do date >> /opt/date.log; sleep 2;  done    # endless loop simulating application log output
          volumes:
            - name: local-time
              hostPath:
                path: /usr/share/zoneinfo/Asia/Shanghai
            - name: logpath
              emptyDir: {}
            - name: filebeatconf
              configMap:
                name: filebeatconf     # Filebeat configuration ConfigMap
                items:
                  - key: filebeat.yml
                    path: usr/share/filebeat/filebeat.yml
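
    The sidecar pattern works because both containers mount the same `logpath` emptyDir: the app appends to `/opt/date.log`, and Filebeat sees the same file at `/data/log/app/date.log`. A local sketch of the same idea, with a temp directory standing in for the emptyDir volume:

    ```shell
    # Simulate the shared emptyDir volume locally.
    logdir=$(mktemp -d)                 # stands in for the emptyDir volume
    date >> "$logdir/date.log"          # "app" container: append one log line
    cat "$logdir/date.log"              # "filebeat" container: same file, same line
    rm -rf "$logdir"
    ```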
    

    3.1、Create the Filebeat sidecar (Kafka-based configuration)

    [root@master filebeat]# cat app-filebeat-kafka.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
      labels:
        app: app
        env: release
    spec:
      selector:
        matchLabels:
          app: app
      replicas: 1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      # minReadySeconds: 30
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
            - name: filebeat                        
              image: registry.cn-beijing.aliyuncs.com/dotbalo/filebeat:7.10.2 
              resources:
                requests:
                  memory: "100Mi"
                  cpu: "10m"
                limits:
                  cpu: "200m"
                  memory: "300Mi"
              imagePullPolicy: IfNotPresent
              env:
                - name: podIp
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: status.podIP
                - name: podName
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.name
                - name: podNamespace
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                - name: podDeployName
                  value: app
                - name: TZ
                  value: "Asia/Shanghai"
              securityContext:
                runAsUser: 0
              volumeMounts:
                - name: logpath
                  mountPath: /data/log/app/
                - name: filebeatconf
                  mountPath: /usr/share/filebeat/filebeat.yml 
                  subPath: usr/share/filebeat/filebeat.yml
            - name: app
              image: registry.cn-beijing.aliyuncs.com/dotbalo/alpine:3.6 
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - name: logpath
                  mountPath: /opt/
              env:
                - name: TZ
                  value: "Asia/Shanghai"
                - name: LANG
                  value: C.UTF-8
                - name: LC_ALL
                  value: C.UTF-8
              command:
                - sh
                - -c
                - while true; do date >> /opt/date.log; sleep 2;  done 
          volumes:
            - name: logpath
              emptyDir: {}
            - name: filebeatconf
              configMap:
                name: filebeatconf
                items:
                  - key: filebeat.yml
                    path: usr/share/filebeat/filebeat.yml
    

    3.2、Verify that the Filebeat sidecar was created successfully

    [root@master filebeat]# kubectl get pod,svc -n logging|grep app
    pod/app-5f4bff79db-xrcrp              2/2     Running   0          21h  # 2/2: the Pod runs both the application and Filebeat containers
    

    4、Create the Logstash configuration

    [root@master filebeat]# cat logstash-cm.yaml 
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: logstash-configmap
    data:
      logstash.yml: |
        http.host: "0.0.0.0"
        path.config: /usr/share/logstash/pipeline
      logstash.conf: |
        # all input will come from filebeat, no local logs
        input {
          kafka {
                  enable_auto_commit => true
                  auto_commit_interval_ms => "1000"
                  bootstrap_servers => "kafka:9092"
                  topics => ["filebeat-sidecar"]
                  type => ["filebeat-sidecar"]
                  codec => json
              }
        }
    
        output {
           stdout{ codec=>rubydebug}
           if [type] == "filebeat-sidecar"{
               elasticsearch {
                 hosts => ["elasticsearch-logging-0.elasticsearch-logging:9200","elasticsearch-logging-1.elasticsearch-logging:9200"]
                 index => "filebeat-%{+YYYY.MM.dd}"
              }
           } else{
              elasticsearch {
                 hosts => ["elasticsearch-logging-0.elasticsearch-logging:9200","elasticsearch-logging-1.elasticsearch-logging:9200"]
                 index => "other-input-%{+YYYY.MM.dd}"
              }
           }
        }
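
    The `%{+YYYY.MM.dd}` sprintf pattern in the `index` option rolls over to a new Elasticsearch index every day, which keeps individual indices small and makes retention cleanup straightforward. What today's index name would look like, computed locally for illustration:

    ```shell
    # Logstash formats "filebeat-%{+YYYY.MM.dd}" from each event's timestamp;
    # the equivalent name for the current date:
    echo "filebeat-$(date +%Y.%m.%d)"
    ```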
    

    4.1、Create the logstash-service for service discovery

    [root@master filebeat]# cat logstash-service.yaml 
    kind: Service
    apiVersion: v1
    metadata:
      name: logstash-service
    spec:
      selector:
        app: logstash
      ports:
      - protocol: TCP
        port: 5044
        targetPort: 5044
      type: ClusterIP
    

    4.2、Create the Logstash deployment

    [root@master filebeat]# cat logstash.yaml 
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: logstash-deployment
    spec:
      selector:
        matchLabels:
          app: logstash
      replicas: 1
      template:
        metadata:
          labels:
            app: logstash
        spec:
          containers:
          - name: logstash
            image: registry.cn-beijing.aliyuncs.com/dotbalo/logstash:7.10.1 
            ports:
            - containerPort: 5044
            volumeMounts:
              - name: config-volume
                mountPath: /usr/share/logstash/config
              - name: logstash-pipeline-volume
                mountPath: /usr/share/logstash/pipeline
          volumes:
          - name: config-volume
            configMap:
              name: logstash-configmap
              items:
                - key: logstash.yml
                  path: logstash.yml
          - name: logstash-pipeline-volume
            configMap:
              name: logstash-configmap
              items:
                - key: logstash.conf
                  path: logstash.conf
    

    5、Deploy the Kibana service

    [root@master efk-7.10.2]# cat kibana-deployment.yaml 
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kibana-logging
      namespace: logging
      labels:
        k8s-app: kibana-logging
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: kibana-logging
      template:
        metadata:
          labels:
            k8s-app: kibana-logging
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: kibana-logging
              image: registry.cn-beijing.aliyuncs.com/dotbalo/kibana-oss:7.10.2
              resources:
                # need more cpu upon initialization, therefore burstable class
                limits:
                  cpu: 1000m
                requests:
                  cpu: 100m
              env:
                - name: ELASTICSEARCH_HOSTS
                  value: http://elasticsearch-logging:9200
                - name: SERVER_NAME
                  value: kibana-logging
                - name: SERVER_BASEPATH
                  value: "/kibana" 
                - name: SERVER_REWRITEBASEPATH
                  value: "true"
              ports:
                - containerPort: 5601
                  name: ui
                  protocol: TCP
              livenessProbe:
                httpGet:
                  path: /kibana/api/status
                  port: ui
                initialDelaySeconds: 5
                timeoutSeconds: 10
              readinessProbe:
                httpGet:
                  path: /kibana/api/status
                  port: ui
                initialDelaySeconds: 5
                timeoutSeconds: 10
    

    5.1、Create the kibana-service for service discovery

    [root@master efk-7.10.2]# cat kibana-service.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: kibana-logging
      namespace: logging
      labels:
        k8s-app: kibana-logging
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "Kibana"
    spec:
      ports:
      - port: 5601
        protocol: TCP
        targetPort: ui
      selector:
        k8s-app: kibana-logging
      type: NodePort
    

    5.2、Verify that Kibana was created successfully

    [root@master efk-7.10.2]# kubectl get pod,svc -n logging|grep kibana
    pod/kibana-logging-7bf48fb7b4-k98zs   1/1     Running   0          35h
    service/kibana-logging          NodePort    10.98.179.159   <none>        5601:30716/TCP                                 35h
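
    With a NodePort Service, Kibana is reachable on any node's IP at the allocated port (30716 in the output above; yours will differ), under the `/kibana` base path configured in the Deployment. A sketch of assembling the URL (the node IP below is a placeholder, not taken from this cluster):

    ```shell
    node_ip="192.168.122.150"   # placeholder: substitute any Kubernetes node IP
    node_port="30716"           # NodePort shown by `kubectl get svc -n logging`
    echo "http://${node_ip}:${node_port}/kibana"
    ```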
    

    5.3、Access the Kibana UI using any Kubernetes node IP plus the Service's NodePort
    Open http://<node-ip>:<node-port>/kibana in a browser.
    5.3.1、Configure the index pattern in the Kibana UI
    [screenshots omitted: configuring the index pattern and viewing the collected logs]
    Note: seeing the collected logs here confirms the EFK logging platform is working correctly.

  • Original source: https://blog.csdn.net/ljx1528/article/details/125465141