Background: a 3-replica workload was deployed using the official mongodb 3.6 image from Docker Hub, but every time a pod restarted, the data that pod had written to its persistent volume was lost. After some investigation, the root cause was found.
The workload was created with the following YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: mongo
  serviceName: mongo
  template:
    metadata:
      labels:
        k8s-app: mongo
    spec:
      containers:
      - command:
        - mongod
        - --bind_ip
        - 0.0.0.0
        - --replSet
        - config
        - --configsvr
        image: mongo:3.6
        imagePullPolicy: IfNotPresent
        name: mongo
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: mongo-pvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mongo-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: cbs
      volumeMode: Filesystem
According to the description on the Docker Hub page, the default path where db data is stored is /data/db:

So mounting the data volume at /data inside the pod looked perfectly fine. After creation the pod started normally and data was written; everything appeared to work as expected.
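A quick way to verify that data is landing in these directories, assuming the first pod is named mongo-0 (the standard <statefulset-name>-0 naming), is to list them from inside the pod:

kubectl exec -it mongo-0 -- ls /data/db /data/configdb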

But whenever the pod restarted, all the data inside it was lost.
Checking with the findmnt command revealed that /data/db and /data/configdb did not appear under /data, the path backed by the mounted PVC; instead they were separate mounts backed by /dev/vda1, the node's local disk.
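The check can be reproduced as follows; findmnt takes a path and prints the source device backing that mount point (mongo-0 assumed as above):

kubectl exec -it mongo-0 -- findmnt /data
kubectl exec -it mongo-0 -- findmnt /data/db
kubectl exec -it mongo-0 -- findmnt /data/configdb

If /data/db and /data/configdb were ordinary directories under the PVC, the last two commands would print nothing; here each showed its own mount backed by /dev/vda1.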

This is the crux of the problem. So what caused it?
Inspecting the image's build history with docker history --no-trunc mongo:3.6 revealed that the Dockerfile uses the VOLUME instruction, declaring /data/db and /data/configdb as anonymous volumes.
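The relevant entry can be filtered out of the history directly; for mongo:3.6 one line should contain VOLUME [/data/db /data/configdb]:

docker history --no-trunc mongo:3.6 | grep -i volume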

In an image built from such a Dockerfile, the paths declared by VOLUME are not ignored by Kubernetes: the container runtime still creates an anonymous volume for each of them. To override them, the PVC mount must be declared at exactly the same path, shadowing the anonymous volume, something like this:
spec:
  template:
    spec:
      containers:
      - volumeMounts:
        - mountPath: /data/db
          name: mongo-pvc
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: mongo-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: cbs
      volumeMode: Filesystem
After adjusting the mount points to /data/db and /data/configdb (see the sketch below) and testing again, the data-loss problem was resolved.
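For completeness, a minimal sketch of covering both paths with the single PVC from the original manifest; splitting one claim via subPath is my assumption here (the snippet above shows only /data/db, and a second volumeClaimTemplate would work equally well):

spec:
  template:
    spec:
      containers:
      - name: mongo
        volumeMounts:
        # Shadow both VOLUME paths declared by the image
        - mountPath: /data/db
          name: mongo-pvc
          subPath: db        # assumed layout: db files in a "db" subdirectory of the claim
        - mountPath: /data/configdb
          name: mongo-pvc
          subPath: configdb  # assumed layout: config server files in "configdb"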