May 2024

I tried to deploy a MongoDB Deployment with Kubernetes, with three replicas all accessing the same storage (PVC). Here is the configuration.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/mongo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          command: ["mongod"]
          args: ["--dbpath", "/data/db"]
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: root
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: pwd
          volumeMounts:
            - name: mongo-data-dir
              mountPath: /data/db
      volumes:
        - name: mongo-data-dir
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-nodeport-svc
spec:
  ports:
    - port: 27017
      protocol: TCP
      targetPort: 27017
      nodePort: 32000
  selector:
    app: mongo
  type: NodePort

When you apply this configuration with kubectl apply -f file_name.yaml, three pods are created, all accessing the same storage for their databases. When you check the pods' status with kubectl get pods, you can see that only one pod becomes Running while the others go into a CrashLoop state. What is happening is that when more than one instance uses the common storage, only one instance can acquire the lock on the mongod.lock file (Unable to lock the lock file: /data/db/mongod.lock).

That's why only one pod becomes healthy and the others end up in CrashLoopBackOff (you can see this using kubectl logs pod-name). What I want to know is:

  1. Can I release the lock on the mongod.lock file so that all the pods can use the storage? (This is probably not possible because of consistency issues.)
  2. If not, how can I implement the proposed idea (accessing a single storage/volume from multiple pod instances)?
  3. Would you suggest other ways to achieve access to a single storage from multiple pod instances?

Your answers and help are welcome.

Hi,

This isn’t something we’d recommend trying to do - it undermines the benefit of having multiple members in the replica set in the first place by having a single point of failure at the storage level if the disk fails.
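If the goal is redundancy, the usual pattern is a StatefulSet with volumeClaimTemplates, so each member of a MongoDB replica set gets its own PersistentVolume and the members replicate data to each other instead of sharing a disk. A minimal sketch, where the headless service name mongo, the replica set name rs0, and the 1Gi storage request are illustrative assumptions, not values from your manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None        # headless service: gives each pod a stable DNS name
  selector:
    app: mongo
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          command: ["mongod"]
          args: ["--replSet", "rs0", "--dbpath", "/data/db", "--bind_ip_all"]
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:   # one PVC per pod: mongo-data-mongo-0, -1, -2
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Each pod then holds its own mongod.lock on its own volume, so there is no lock contention, and losing one disk no longer takes down the whole set. After the pods are up you would still initiate the replica set once (for example with rs.initiate() against the first member) so that replication starts.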