• Prometheus cross-cluster scraping


    Background

    I don't want to run too many Prometheus instances, so I'd like to use a single one. The prerequisite, of course, is that cluster A can reach cluster B's network, so that one Prometheus can scrape across clusters.

    About cluster A

    For cluster A and its Prometheus setup, see: Kubernetes 1.20.5 安装Prometheus-Oprator

    Cluster B

    For the cluster B side, follow 阳明's post: Prometheus 监控外部 Kubernetes 集群

    Create the RBAC objects:

    cat rbac.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: prometheus
      namespace: monitoring
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: prometheus
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes
      - services
      - endpoints
      - pods
      - nodes/proxy
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - "networking.k8s.io"
      resources:
        - ingresses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - configmaps
      - nodes/metrics
      verbs:
      - get
    - nonResourceURLs:
      - /metrics
      verbs:
      - get
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: prometheus
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: prometheus
    subjects:
    - kind: ServiceAccount
      name: prometheus
      namespace: monitoring
    
    
    kubectl apply -f rbac.yaml
    
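Once the RBAC objects are applied, the ServiceAccount's permissions can be spot-checked with kubectl auth can-i. A sketch — the impersonation subject is just derived from the namespace and ServiceAccount name above:

```shell
# Build the impersonation subject for the prometheus ServiceAccount
NS=monitoring
SA=prometheus
SUBJECT="system:serviceaccount:${NS}:${SA}"
echo "$SUBJECT"   # system:serviceaccount:monitoring:prometheus

# Spot-check a few of the verbs granted by the ClusterRole
# (requires a working kubectl context against cluster B):
# kubectl auth can-i list nodes --as="$SUBJECT"
# kubectl auth can-i get nodes/metrics --as="$SUBJECT"
```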
    [root@sh-master-01 prometheus]# kubectl get sa -n monitoring
    [root@sh-master-01 prometheus]# kubectl get secret -n monitoring
    
    


    Pay particular attention to the ServiceAccount's secret token.

    Why is there no secret to be seen? See: https://itnext.io/big-change-in-k8s-1-24-about-serviceaccounts-and-their-secrets-4b909a4af4e0 — this changed in Kubernetes 1.24: ServiceAccount token Secrets are no longer created automatically. My version here is 1.25, so:

    [root@sh-master-01 manifests]# kubectl create token prometheus -n monitoring --duration=999999h
    
    
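Before wiring the token into Prometheus, it is worth checking it against cluster B's apiserver directly. A sketch — the token value is a placeholder, and 10.0.2.28:6443 is this post's cluster B address; use your own:

```shell
# Placeholder token; in practice capture the real one, e.g.:
# TOKEN=$(kubectl create token prometheus -n monitoring --duration=999999h)
TOKEN="example-token"
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
echo "$AUTH_HEADER"

# A node list in the response means both the token and the RBAC work:
# curl -sk -H "$AUTH_HEADER" https://10.0.2.28:6443/api/v1/nodes
```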


    Many posts online extract the token with yq:

    kubectl get secret prometheus -n monitoring -o yaml | yq r - data.token | base64 -D
    
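If yq won't cooperate — the one-liner above uses the old yq v3 "yq r" syntax, and base64 -D is the macOS flag — kubectl's jsonpath output avoids yq entirely. A sketch; on Linux the decode flag is base64 -d:

```shell
# Pull just the token field and decode it, no yq needed:
# kubectl get secret prometheus -n monitoring \
#   -o jsonpath='{.data.token}' | base64 -d

# The decode step itself, demonstrated on a stand-in value:
encoded=$(printf 'my-sa-token' | base64)
printf '%s' "$encoded" | base64 -d   # my-sa-token
```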

    But even after installing yq I couldn't quite get it to work. What to do? Fall back to the clumsy method: run a pod under the ServiceAccount and read the token mounted into it:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: pod1
      name: pod1
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: prometheus
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: pod1
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    
    
    kubectl apply -f pod.yaml -n monitoring
    
    kubectl exec -it pod1 -n monitoring -- cat /run/secrets/kubernetes.io/serviceaccount/token
    
    
    

    Note that the token is not the same on two consecutive reads — something to dig into later. Presumably kubectl create token prometheus --duration=999999h would work just as well (again, see https://itnext.io/big-change-in-k8s-1-24-about-serviceaccounts-and-their-secrets-4b909a4af4e0).

    Regenerate additional-configs for the Prometheus in cluster A

    In cluster A's Prometheus config directory, edit prometheus-additional.yaml and replace every XXXXXXXXXXXXXXXXXXX bearer_token placeholder with the copied token.
    Note: cluster B's apiserver address here is 10.0.2.28:6443 — change it to your own!

    - job_name: 'kubernetes-apiservers-other-cluster'
      kubernetes_sd_configs:
        - role: endpoints
          api_server: https://10.0.2.28:6443
          tls_config:
            insecure_skip_verify: true
          bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      tls_config:
        insecure_skip_verify: true
      bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      scheme: https
      relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
        - target_label: __address__
          replacement: 10.0.2.28:6443
    - job_name: 'kubernetes-nodes-other-cluster'
      kubernetes_sd_configs:
        - role: node
          api_server: https://10.0.2.28:6443
          tls_config:
            insecure_skip_verify: true
          bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      tls_config:
        insecure_skip_verify: true
      bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      scheme: https
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: 10.0.2.28:6443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-nodes-cadvisor-other-cluster'
      kubernetes_sd_configs:
        - role: node
          api_server: https://10.0.2.28:6443
          tls_config:
            insecure_skip_verify: true
          bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      tls_config:
        insecure_skip_verify: true
      bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      scheme: https
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: 10.0.2.28:6443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-state-metrics-other-cluster'
      kubernetes_sd_configs:
        - role: endpoints
          api_server: https://10.0.2.28:6443
          tls_config:
            insecure_skip_verify: true
          bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      tls_config:
        insecure_skip_verify: true
      bearer_token: 'XXXXXXXXXXXXXXXXXXX'
      scheme: https
      relabel_configs:
        - source_labels: [__meta_kubernetes_service_name]
          action: keep
          regex: '^(kube-state-metrics)$'
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__address__]
          action: replace
          target_label: instance
        - target_label: __address__
          replacement: 10.0.2.28:6443
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_name, __meta_kubernetes_pod_container_port_number]
          regex: ([^;]+);([^;]+);([^;]+)
          target_label: __metrics_path__
          replacement: /api/v1/namespaces/${1}/pods/http:${2}:${3}/proxy/metrics
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: service_name
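The node jobs above scrape through the apiserver proxy: __address__ is rewritten to 10.0.2.28:6443 and __metrics_path__ is built from the node name. The effect of that __metrics_path__ relabel rule can be illustrated with a plain regex substitution (a sketch; the node name is made up):

```shell
# Simulate the relabel: (.+) captures the node name, and the replacement
# embeds it into the apiserver proxy path, exactly as in the rule above.
node="sh-node-01"   # hypothetical node name from cluster B
path=$(printf '%s' "$node" | sed -E 's|(.+)|/api/v1/nodes/\1/proxy/metrics|')
echo "$path"   # /api/v1/nodes/sh-node-01/proxy/metrics
```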
    
    kubectl delete secret additional-configs -n monitoring
    kubectl create secret generic additional-configs --from-file=prometheus-additional.yaml -n monitoring
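The delete-then-create pair can also be replaced by an idempotent one-liner that works whether or not the secret already exists. A sketch — it assumes the kubectl context points at cluster A:

```shell
# Render the secret client-side and apply it in one step:
# kubectl create secret generic additional-configs \
#   --from-file=prometheus-additional.yaml -n monitoring \
#   --dry-run=client -o yaml | kubectl apply -f -

# --from-file with no explicit key uses the file's basename as the key,
# which is the name the Prometheus CR references in the secret:
file=config/prometheus-additional.yaml
key=$(basename "$file")
echo "$key"   # prometheus-additional.yaml
```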
    

    A quick look in Grafana — some of the panels are still a bit off, but I'll let that slide. I'm not going to chase down whether it's a version mismatch or something else; for now this counts as done.

    To sum up:

    In a real environment I probably wouldn't do it this way — I'd stick with what I did before: one Prometheus Operator deployment per Kubernetes cluster, all feeding a single Grafana...
    Still, going through the exercise is how I ran into the Kubernetes 1.24 change: BIG change in K8s 1.24 about ServiceAccounts and their Secrets.

  • Original article: https://blog.csdn.net/saynaihe/article/details/126768296