

    Single Node


    Here we use node-exporter as the example. First we use Prometheus to scrape its metrics and remote-write the data into VictoriaMetrics (VM) acting as remote storage. Then, since VM also provides the vmagent component, we replace Prometheus with VM entirely, which gives a simpler architecture and lower resource usage.

    We will run all the resources in the kube-vm namespace:

    ☸ ➜ kubectl create ns kube-vm
    

    First, run node-exporter in the kube-vm namespace with a DaemonSet controller. The manifest is shown below:

    # vm-node-exporter.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-exporter
      namespace: kube-vm
    spec:
      selector:
        matchLabels:
          app: node-exporter
      template:
        metadata:
          labels:
            app: node-exporter
        spec:
          hostPID: true
          hostIPC: true
          hostNetwork: true
          nodeSelector:
            kubernetes.io/os: linux
          containers:
            - name: node-exporter
              image: prom/node-exporter:v1.3.1
              args:
                - --web.listen-address=$(HOSTIP):9111
                - --path.procfs=/host/proc
                - --path.sysfs=/host/sys
                - --path.rootfs=/host/root
                - --no-collector.hwmon # disable collectors we don't need
                - --no-collector.nfs
                - --no-collector.nfsd
                - --no-collector.nvme
                - --no-collector.dmi
                - --no-collector.arp
                - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/containerd/.+|/var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
                - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
              ports:
                - containerPort: 9111
              env:
                - name: HOSTIP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
              resources:
                requests:
                  cpu: 150m
                  memory: 180Mi
                limits:
                  cpu: 150m
                  memory: 180Mi
              securityContext:
                runAsNonRoot: true
                runAsUser: 65534
              volumeMounts:
                - name: proc
                  mountPath: /host/proc
                - name: sys
                  mountPath: /host/sys
                - name: root
                  mountPath: /host/root
                  mountPropagation: HostToContainer
                  readOnly: true
          tolerations: # tolerate all taints so the pod runs on every node
            - operator: "Exists"
          volumes:
            - name: proc
              hostPath:
                path: /proc
            - name: dev
              hostPath:
                path: /dev
            - name: sys
              hostPath:
                path: /sys
            - name: root
              hostPath:
                path: /

    Since we already deployed a node-exporter in an earlier chapter, we use --web.listen-address=$(HOSTIP):9111 to listen on port 9111 and avoid a port conflict. Then simply apply the manifest above.

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-node-exporter.yaml
    ☸ ➜ kubectl get pods -n kube-vm -owide
    NAME                  READY   STATUS    RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
    node-exporter-c4d76   1/1     Running   0          118s   192.168.0.109   node2     <none>           <none>
    node-exporter-hzt8s   1/1     Running   0          118s   192.168.0.111   master1   <none>           <none>
    node-exporter-zlxwb   1/1     Running   0          118s   192.168.0.110   node1     <none>           <none>
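    A note on the $(HOSTIP) reference in the DaemonSet args: Kubernetes expands $(VAR) references in container args using the container's environment variables (here HOSTIP, populated from status.hostIP via the downward API), leaves unknown references untouched, and treats $$ as an escape. A minimal Python sketch of that expansion (the expand_args helper is our own, for illustration only):

```python
import re

def expand_args(arg: str, env: dict) -> str:
    """Mimic Kubernetes' $(VAR) expansion in container args.

    $(VAR) is replaced when VAR is defined in the environment;
    $$(VAR) escapes to a literal $(VAR); unknown variables are
    left untouched.
    """
    def repl(m: re.Match) -> str:
        if m.group(0).startswith("$$"):
            return m.group(0)[1:]          # $$(VAR) -> $(VAR)
        name = m.group(1)
        return env.get(name, m.group(0))   # unknown vars stay as-is
    return re.sub(r"\$?\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, arg)

# the node-exporter listen address from the DaemonSet above:
print(expand_args("--web.listen-address=$(HOSTIP):9111",
                  {"HOSTIP": "192.168.0.109"}))
# -> --web.listen-address=192.168.0.109:9111
```

    This is why each node-exporter pod binds to its own node IP on port 9111 rather than all interfaces.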

    Next, deploy a separate Prometheus instance. For simplicity we scrape the node-exporters with a static_configs block; the configuration is as follows:

    # vm-prom-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus-config
      namespace: kube-vm
    data:
      prometheus.yaml: |
        global:
          scrape_interval: 15s
          scrape_timeout: 15s
        scrape_configs:
          - job_name: "nodes"
            static_configs:
              - targets: ['192.168.0.109:9111', '192.168.0.110:9111', '192.168.0.111:9111']
            relabel_configs: # extract the IP from __address__ via relabeling, to verify later that VM supports relabeling
              - source_labels: [__address__]
                regex: "(.*):(.*)"
                replacement: "${1}"
                target_label: 'ip'
                action: replace

    The relabel rule above extracts the IP from __address__ into an ip label, which we will use later to verify that VM is compatible with relabeling.
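    To see what this relabel rule actually does: Prometheus fully anchors the regex against the joined source label values and substitutes ${1}, ${2}, ... in the replacement with the captured groups. A simplified sketch of the 'replace' action in Python (relabel_replace is our own helper; real Prometheus leaves the target label unset on no match rather than returning the input):

```python
import re

def relabel_replace(value: str, regex: str, replacement: str) -> str:
    """Apply one Prometheus relabel 'replace' action to a label value.

    The regex is fully anchored; ${1}, ${2}, ... in the replacement
    are expanded with the captured groups.
    """
    m = re.fullmatch(regex, value)
    if m is None:
        return value  # simplification: no match leaves the value alone
    out = replacement
    for i, g in enumerate(m.groups(), start=1):
        out = out.replace("${%d}" % i, g or "")
    return out

# __address__ -> ip, as in the relabel_configs above:
print(relabel_replace("192.168.0.109:9111", r"(.*):(.*)", "${1}"))
# -> 192.168.0.109
```

    Note that (.*) is greedy, so for host:port values the first group consumes everything up to the last colon, which is exactly what we want here.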

    The Prometheus data should also be persisted, so create a matching PV and PVC:

    # apiVersion: storage.k8s.io/v1
    # kind: StorageClass
    # metadata:
    #   name: local-storage
    # provisioner: kubernetes.io/no-provisioner
    # volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-data
    spec:
      accessModes:
        - ReadWriteOnce
      capacity:
        storage: 20Gi
      storageClassName: local-storage
      local:
        path: /data/k8s/prometheus
      persistentVolumeReclaimPolicy: Retain
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - node2
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: prometheus-data
      namespace: kube-vm
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: local-storage

    Now create the Prometheus Deployment itself: mount the PVC and ConfigMap above into the container, point --config.file at the configuration file, and set the TSDB data path. The manifest is shown below:

    # vm-prom-deploy.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: prometheus
      namespace: kube-vm
    spec:
      selector:
        matchLabels:
          app: prometheus
      template:
        metadata:
          labels:
            app: prometheus
        spec:
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: prometheus-data
            - name: config-volume
              configMap:
                name: prometheus-config
          containers:
            - image: prom/prometheus:v2.35.0
              name: prometheus
              args:
                - "--config.file=/etc/prometheus/prometheus.yaml"
                - "--storage.tsdb.path=/prometheus" # TSDB data path
                - "--storage.tsdb.retention.time=2d"
                - "--web.enable-lifecycle" # enable hot reload via POST to localhost:9090/-/reload
              ports:
                - containerPort: 9090
                  name: http
              securityContext:
                runAsUser: 0
              volumeMounts:
                - mountPath: "/etc/prometheus"
                  name: config-volume
                - mountPath: "/prometheus"
                  name: data
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus
      namespace: kube-vm
    spec:
      selector:
        app: prometheus
      type: NodePort
      ports:
        - name: web
          port: 9090
          targetPort: http

    Simply apply the manifests above.

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-prom-config.yaml
    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-prom-pvc.yaml
    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-prom-deploy.yaml
    ☸ ➜ kubectl get pods -n kube-vm -owide
    NAME                      READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
    node-exporter-c4d76       1/1     Running   0          27m     192.168.0.109   node2     <none>           <none>
    node-exporter-hzt8s       1/1     Running   0          27m     192.168.0.111   master1   <none>           <none>
    node-exporter-zlxwb       1/1     Running   0          27m     192.168.0.110   node1     <none>           <none>
    prometheus-dfc9f6-2w2vf   1/1     Running   0          4m58s   10.244.2.102    node2     <none>           <none>
    ☸ ➜ kubectl get svc -n kube-vm
    NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    prometheus   NodePort   10.103.38.114   <none>        9090:31890/TCP   4m10s

    Once deployed, Prometheus is reachable at http://<node-ip>:31890, and the nodes job should show all 3 node targets being scraped.

    Redeploy Grafana in the same way, with the following manifest:

    # vm-grafana.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: grafana
      namespace: kube-vm
    spec:
      selector:
        matchLabels:
          app: grafana
      template:
        metadata:
          labels:
            app: grafana
        spec:
          volumes:
            - name: storage
              persistentVolumeClaim:
                claimName: grafana-data
          containers:
            - name: grafana
              image: grafana/grafana:main
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
                  name: grafana
              securityContext:
                runAsUser: 0
              env:
                - name: GF_SECURITY_ADMIN_USER
                  value: admin
                - name: GF_SECURITY_ADMIN_PASSWORD
                  value: admin321
              volumeMounts:
                - mountPath: /var/lib/grafana
                  name: storage
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: grafana
      namespace: kube-vm
    spec:
      type: NodePort
      ports:
        - port: 3000
      selector:
        app: grafana
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: grafana-data
    spec:
      accessModes:
        - ReadWriteOnce
      capacity:
        storage: 1Gi
      storageClassName: local-storage
      local:
        path: /data/k8s/grafana
      persistentVolumeReclaimPolicy: Retain
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - node2
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: grafana-data
      namespace: kube-vm
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: local-storage

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-grafana.yaml
    ☸ ➜ kubectl get svc -n kube-vm |grep grafana
    grafana   NodePort   10.97.111.153   <none>   3000:31800/TCP   62s

    Grafana is now available at http://<node-ip>:31800; log in and configure the Prometheus data source.

    Then import Dashboard 16098 to visualize the node metrics.

    That completes node metrics collection with Prometheus. Next we rework this setup with VM.


    Remote Storage: VictoriaMetrics


    First we need a single-node VM instance. Running VM is simple: download the binary and start it directly, or launch the Docker image; here we deploy it into the Kubernetes cluster as well. The manifest is shown below.

    # vm-single-node-deploy.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: victoria-metrics
      namespace: kube-vm
    spec:
      selector:
        matchLabels:
          app: victoria-metrics
      template:
        metadata:
          labels:
            app: victoria-metrics
        spec:
          volumes:
            - name: storage
              persistentVolumeClaim:
                claimName: victoria-metrics-data
          containers:
            - name: vm
              image: victoriametrics/victoria-metrics:v1.76.1
              imagePullPolicy: IfNotPresent
              args:
                - -storageDataPath=/var/lib/victoria-metrics-data
                - -retentionPeriod=1w
              ports:
                - containerPort: 8428
                  name: http
              volumeMounts:
                - mountPath: /var/lib/victoria-metrics-data
                  name: storage
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: victoria-metrics
      namespace: kube-vm
    spec:
      type: NodePort
      ports:
        - port: 8428
      selector:
        app: victoria-metrics
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: victoria-metrics-data
    spec:
      accessModes:
        - ReadWriteOnce
      capacity:
        storage: 20Gi
      storageClassName: local-storage
      local:
        path: /data/k8s/vm
      persistentVolumeReclaimPolicy: Retain
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - node2
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: victoria-metrics-data
      namespace: kube-vm
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: local-storage

    Here -storageDataPath sets the data directory, which we persist through the PVC above, and -retentionPeriod sets the data retention: 1w keeps one week of data, and if unset the default is 1 month. Simply apply the manifest above.

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-single-node-deploy.yaml
    ☸ ➜ kubectl get svc victoria-metrics -n kube-vm
    NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    victoria-metrics   NodePort   10.106.216.248   <none>        8428:31953/TCP   75m
    ☸ ➜ kubectl get pods -n kube-vm -l app=victoria-metrics
    NAME                                READY   STATUS    RESTARTS   AGE
    victoria-metrics-57d47f4587-htb88   1/1     Running   0          3m12s
    ☸ ➜ kubectl logs -f victoria-metrics-57d47f4587-htb88 -n kube-vm
    2022-04-22T08:59:14.431Z  info  VictoriaMetrics/lib/logger/flag.go:12  build version: victoria-metrics-20220412-134346-tags-v1.76.1-0-gf8de318bf
    2022-04-22T08:59:14.431Z  info  VictoriaMetrics/lib/logger/flag.go:13  command line flags
    2022-04-22T08:59:14.431Z  info  VictoriaMetrics/lib/logger/flag.go:20  flag "retentionPeriod"="1w"
    2022-04-22T08:59:14.431Z  info  VictoriaMetrics/lib/logger/flag.go:20  flag "storageDataPath"="/var/lib/victoria-metrics-data"
    2022-04-22T08:59:14.431Z  info  VictoriaMetrics/app/victoria-metrics/main.go:52  starting VictoriaMetrics at ":8428"...
    2022-04-22T08:59:14.432Z  info  VictoriaMetrics/app/vmstorage/main.go:97  opening storage at "/var/lib/victoria-metrics-data" with -retentionPeriod=1w
    ......
    2022-04-22T08:59:14.449Z  info  VictoriaMetrics/app/victoria-metrics/main.go:61  started VictoriaMetrics in 0.017 seconds
    2022-04-22T08:59:14.449Z  info  VictoriaMetrics/lib/httpserver/httpserver.go:91  starting http server at http://127.0.0.1:8428/
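    As the log line flag "retentionPeriod"="1w" shows, VM accepts duration suffixes for this flag. A rough sketch of how such values map to seconds (retention_seconds is our own illustrative helper; the bare-number-means-months rule and the 31-day month are our reading of the VM docs, so verify against your version):

```python
def retention_seconds(value: str) -> int:
    """Convert a VM-style retention value to seconds.

    A bare number is interpreted as months (VM's default unit,
    approximated here as 31 days); otherwise the suffixes
    h, d, w, y are supported. Illustrative only -- VM's own
    parser is authoritative.
    """
    units = {"h": 3600, "d": 86400, "w": 7 * 86400, "y": 365 * 86400}
    if value[-1].isdigit():
        return int(value) * 31 * 86400
    return int(value[:-1]) * units[value[-1]]

print(retention_seconds("1w"))  # -> 604800 (one week)
print(retention_seconds("1"))   # one month, the default unit
```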

    Our single-node VictoriaMetrics is now deployed. Next, we only need to configure Prometheus to remote-write into it. (If you run several Prometheus instances, for example shards, each shard can remote_write into the same VM, so the storage load moves from Prometheus onto VM.) Update the Prometheus configuration:

    # vm-prom-config2.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus-config
      namespace: kube-vm
    data:
      prometheus.yaml: |
        global:
          scrape_interval: 15s
          scrape_timeout: 15s
        remote_write: # remote-write to the VM storage
          - url: http://victoria-metrics:8428/api/v1/write
        scrape_configs:
          - job_name: "nodes"
            static_configs:
              - targets: ['192.168.0.109:9111', '192.168.0.110:9111', '192.168.0.111:9111']
            relabel_configs: # extract the IP from __address__ via relabeling, to verify later that VM supports relabeling
              - source_labels: [__address__]
                regex: "(.*):(.*)"
                replacement: "${1}"
                target_label: 'ip'
                action: replace

    Re-apply the Prometheus configuration object:

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-prom-config2.yaml
    # after updating, trigger a reload so Prometheus picks up the new config
    ☸ ➜ curl -X POST "http://192.168.0.111:31890/-/reload"

    Once the configuration takes effect, Prometheus starts remote-writing data into VM. We can verify this by checking whether VM's persistent data directory is being populated:

    ☸ ➜ ll /data/k8s/vm/data/
    total 0
    drwxr-xr-x 4 root root 38 Apr 22 17:15 big
    -rw-r--r-- 1 root root  0 Apr 22 16:59 flock.lock
    drwxr-xr-x 4 root root 38 Apr 22 17:15 small
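    Besides the remote_write protocol (snappy-compressed protobuf), VM also accepts plain Prometheus exposition-format text on its /api/v1/import/prometheus endpoint, which is handy for smoke-testing the write path with curl. The helper below only builds such a payload locally (exposition_lines is our own name; the endpoint itself is documented by VM):

```python
def exposition_lines(metric: str, samples: dict) -> str:
    """Render samples keyed by a tuple of (label, value) pairs as
    Prometheus exposition-format text, one line per sample."""
    lines = []
    for labels, value in samples.items():
        label_str = ",".join('%s="%s"' % (k, v) for k, v in labels)
        lines.append("%s{%s} %s" % (metric, label_str, value))
    return "\n".join(lines)

payload = exposition_lines(
    "node_custom_metric",
    {(("instance", "192.168.0.109:9111"), ("job", "nodes")): 1.0},
)
print(payload)
# -> node_custom_metric{instance="192.168.0.109:9111",job="nodes"} 1.0
# then, against the NodePort from earlier:
#   curl -d "$payload" http://<node-ip>:31953/api/v1/import/prometheus
```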

    Now switch the data source URL in Grafana to the VM address:

    After the change, revisit the node-exporter dashboard: it renders normally, which shows that VM is compatible.


    Replacing Prometheus


    Above we remote-wrote Prometheus data into VM, but enabling remote write increases Prometheus's own resource usage. In principle we can replace Prometheus with VM entirely: then no remote write is needed, and VM itself consumes fewer resources than Prometheus.

    First, scale the Prometheus Deployment down to stop it:

    ☸ ➜ kubectl scale deploy prometheus --replicas=0 -n kube-vm
    

    Then mount the Prometheus configuration file into the VM container and point the -promscrape.config flag at it, as shown below:

    # vm-single-node-deploy2.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: victoria-metrics
      namespace: kube-vm
    spec:
      selector:
        matchLabels:
          app: victoria-metrics
      template:
        metadata:
          labels:
            app: victoria-metrics
        spec:
          volumes:
            - name: storage
              persistentVolumeClaim:
                claimName: victoria-metrics-data
            - name: prometheus-config
              configMap:
                name: prometheus-config
          containers:
            - name: vm
              image: victoriametrics/victoria-metrics:v1.76.1
              imagePullPolicy: IfNotPresent
              args:
                - -storageDataPath=/var/lib/victoria-metrics-data
                - -retentionPeriod=1w
                - -promscrape.config=/etc/prometheus/prometheus.yaml
              ports:
                - containerPort: 8428
                  name: http
              volumeMounts:
                - mountPath: /var/lib/victoria-metrics-data
                  name: storage
                - mountPath: /etc/prometheus
                  name: prometheus-config

    Remember to remove the remote_write block from the Prometheus config first (i.e. re-apply the original config), then update VM:

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-prom-config.yaml
    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-single-node-deploy2.yaml
    ☸ ➜ kubectl get pods -n kube-vm -l app=victoria-metrics
    NAME                                READY   STATUS    RESTARTS      AGE
    victoria-metrics-8466844968-ncfnp   1/1     Running   2 (3m3s ago)  3m45s
    ☸ ➜ kubectl logs -f victoria-metrics-8466844968-ncfnp -n kube-vm
    ......
    2022-04-22T10:01:59.837Z  info  VictoriaMetrics/app/victoria-metrics/main.go:61  started VictoriaMetrics in 0.022 seconds
    2022-04-22T10:01:59.837Z  info  VictoriaMetrics/lib/httpserver/httpserver.go:91  starting http server at http://127.0.0.1:8428/
    2022-04-22T10:01:59.837Z  info  VictoriaMetrics/lib/httpserver/httpserver.go:92  pprof handlers are exposed at http://127.0.0.1:8428/debug/pprof/
    2022-04-22T10:01:59.838Z  info  VictoriaMetrics/lib/promscrape/scraper.go:103  reading Prometheus configs from "/etc/prometheus/prometheus.yaml"
    2022-04-22T10:01:59.838Z  info  VictoriaMetrics/lib/promscrape/config.go:96  starting service discovery routines...
    2022-04-22T10:01:59.839Z  info  VictoriaMetrics/lib/promscrape/config.go:102  started service discovery routines in 0.000 seconds
    2022-04-22T10:01:59.840Z  info  VictoriaMetrics/lib/promscrape/scraper.go:395  static_configs: added targets: 3, removed targets: 0; total targets: 3

    The VM logs show that it successfully read the Prometheus configuration and added 3 scrape targets (the node-exporters). Now check whether the node-exporter Dashboard in Grafana still renders correctly, making sure the data source points at the VM address.

    With that, VM has fully replaced Prometheus. We can also explore the collected metrics from Grafana's Explore page.


    The Web UI


    The single-node VM binary ships with a built-in web UI, vmui. It is still fairly basic, and can be reached through VM's NodePort:

    ☸ ➜ kubectl get svc victoria-metrics -n kube-vm
    NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    victoria-metrics   NodePort   10.106.216.248   <none>        8428:31953/TCP   75m

    So here vmui is reachable at http://<node-ip>:31953:

    The UI itself lives under the /vmui endpoint:

    To inspect the scrape targets, use the /targets endpoint:

    This covers basic needs but is still quite minimal. If you are used to the Prometheus UI, you can run promxy instead of vmui; promxy can also aggregate data from multiple single-node VMs and show targets. The manifest is shown below:

    # vm-promxy.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: promxy-config
      namespace: kube-vm
    data:
      config.yaml: |
        promxy:
          server_groups:
            - static_configs:
                - targets: [victoria-metrics:8428] # the VM address; append more entries for multiple VMs
              path_prefix: /prometheus # path prefix for requests to this server group
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: promxy
      namespace: kube-vm
    spec:
      selector:
        matchLabels:
          app: promxy
      template:
        metadata:
          labels:
            app: promxy
        spec:
          containers:
            - args:
                - "--config=/etc/promxy/config.yaml"
                - "--web.enable-lifecycle"
                - "--log-level=trace"
              env:
                - name: ROLE
                  value: "1"
              command:
                - "/bin/promxy"
              image: quay.io/jacksontj/promxy
              imagePullPolicy: Always
              name: promxy
              ports:
                - containerPort: 8082
                  name: web
              volumeMounts:
                - mountPath: "/etc/promxy/"
                  name: promxy-config
                  readOnly: true
            - args: # sidecar that reloads promxy when the ConfigMap changes
                - "--volume-dir=/etc/promxy"
                - "--webhook-url=http://localhost:8082/-/reload"
              image: jimmidyson/configmap-reload:v0.1
              name: promxy-server-configmap-reload
              volumeMounts:
                - mountPath: "/etc/promxy/"
                  name: promxy-config
                  readOnly: true
          volumes:
            - configMap:
                name: promxy-config
              name: promxy-config
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: promxy
      namespace: kube-vm
    spec:
      type: NodePort
      ports:
        - port: 8082
      selector:
        app: promxy
    Apply the resources above directly:

    ☸ ➜ kubectl apply -f https://p8s.io/docs/victoriametrics/manifests/vm-promxy.yaml
    ☸ ➜ kubectl get pods -n kube-vm -l app=promxy
    NAME                      READY   STATUS    RESTARTS   AGE
    promxy-5f7dfdbc64-l4kjq   2/2     Running   0          6m45s
    ☸ ➜ kubectl get svc promxy -n kube-vm
    NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    promxy   NodePort   10.110.19.254   <none>        8082:30618/TCP   6m12s

    The promxy pages look and behave essentially like Prometheus's own web UI.
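    Conceptually, promxy fans each query out to all configured server groups and merges the returned series, deduplicating samples that several backends share. A toy sketch of that merge for a single series (our own simplification, not promxy's actual code):

```python
def merge_samples(*replicas):
    """Merge (timestamp, value) samples from several backends,
    keeping one sample per timestamp (first backend wins)."""
    merged = {}
    for samples in replicas:
        for ts, val in samples:
            merged.setdefault(ts, val)
    return sorted(merged.items())

vm_a = [(1000, 0.5), (1015, 0.6)]  # backend A has all the points
vm_b = [(1015, 0.6), (1030, 0.7)]  # backend B missed the first scrape
print(merge_samples(vm_a, vm_b))
# -> [(1000, 0.5), (1015, 0.6), (1030, 0.7)]
```

    This is also why promxy is useful with multiple VM single nodes: gaps in one backend can be filled from another.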

    That wraps up this brief introduction to using single-node VictoriaMetrics.

  • Original article: https://blog.csdn.net/qq_34556414/article/details/125621536