Increase Storage for Persistent Volumes
The Ops Manager, MongoDB Database, AppDB, and Backup Daemon custom resources that comprise a standard Kubernetes Operator deployment are each deployed as Kubernetes StatefulSets. The Kubernetes Operator supports increasing the storage associated with these resources by increasing the capacity of their respective Kubernetes PersistentVolumeClaims, provided that the underlying Kubernetes StorageClass supports PersistentVolume expansion.
Depending on the specific resource type, you can increase storage in one of two ways. You can either manually increase storage, or you can leverage the Kubernetes Operator easy storage expansion feature. The following table illustrates which of these two procedures is supported for a given custom resource type.
Custom Resource Type | Manual Storage Expansion | Easy Storage Expansion
---|---|---
AppDB | ✓ | ✓
Backup Daemon | ✓ |
MongoDB Database | ✓ | ✓
MongoDB Multi-Cluster | | ✓
Ops Manager | ✓ |
Prerequisites
Storage Class Must Support Resizing
Make sure that the StorageClass and the volume plugin provider that the Persistent Volumes use support resizing. You can enable volume expansion on a StorageClass by patching it:

kubectl patch storageclass/<my-storageclass> --type='json' \
  -p='[{"op": "add", "path": "/allowVolumeExpansion", "value": true }]'
If you don't have a StorageClass that supports resizing, ask your Kubernetes administrator to help.
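To check whether a given StorageClass already allows expansion, one option is to inspect its allowVolumeExpansion field directly (the jsonpath expression below is one way to do this; replace <my-storageclass> with your StorageClass name):

```shell
# Print the allowVolumeExpansion setting of a StorageClass.
# An output of "true" means the StorageClass permits
# PersistentVolumeClaim expansion.
kubectl get storageclass <my-storageclass> \
  -o jsonpath='{.allowVolumeExpansion}'
```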
Easy Expand Storage
Note
The easy expansion mechanism requires the default RBAC included with the Kubernetes Operator.
Specifically, it requires get, list, watch, patch, and update permissions for PersistentVolumeClaims. If you have customized any of the Kubernetes Operator RBAC resources, you might need to adjust these permissions to allow the Kubernetes Operator to resize storage resources in your Kubernetes cluster.
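If you manage the Kubernetes Operator's RBAC yourself, the required PVC permissions can be expressed as a Role rule along these lines (a sketch only; the Role name and namespace below are examples, and your deployment's values will differ):

```yaml
# Hypothetical excerpt of an operator Role granting the
# PersistentVolumeClaim permissions that easy storage
# expansion depends on.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-enterprise-operator   # example name
  namespace: mongodb                  # example namespace
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "patch", "update"]
```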
This process results in a rolling restart of the MongoDB custom resource in your Kubernetes cluster.
Create or identify a persistent custom resource.
Use an existing database resource or create a new one with persistent storage. Wait until the resource reaches the Running state.
Example
A database resource with persistent storage would include:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.4.0"
  project: my-project
  credentials: my-credentials
  type: ReplicaSet
  podSpec:
    persistence:
      single:
        storage: "1Gi"
Insert data into the database that the resource serves.
Start mongo in the Kubernetes cluster:

kubectl exec -it <my-replica-set>-0 \
  /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo

Insert data into the test database:

<my-replica-set>:PRIMARY> use test
switched to db test
<my-replica-set>:PRIMARY> db.tmp.insertOne({"foo":"bar"})
{
  "acknowledged" : true,
  "insertedId" : ObjectId("61128cb4a783c3c57ae5142d")
}
Update the database resource with a new storage value.
Important
You can only increase the disk size of existing storage resources; you can't decrease it. Attempting to decrease the storage size causes an error during the reconcile stage.
Update the disk size. Open your preferred text editor and make changes similar to this example:
Example
To update the disk size of the replica set to 2 GB, change the storage value in the database resource specification:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.4.0"
  project: my-project
  credentials: my-credentials
  type: ReplicaSet
  podSpec:
    persistence:
      single:
        storage: "2Gi"

Update the MongoDB custom resource with the new volume size:

kubectl apply -f my-updated-replica-set-vol.yaml

Wait until this StatefulSet reaches the Running state.
Validate that the Persistent Volume Claim has been resized.
Check the status of the MongoDB custom resource:
kubectl describe mongodb/<my-replica-set> -n mongodb
The following output indicates that your PVC resize request is being processed.
status:
  clusterStatusList: {}
  lastTransition: "2024-08-21T11:03:52+02:00"
  message: StatefulSet not ready
  observedGeneration: 2
  phase: Pending
  pvc:
  - phase: PVC Resize - STS has been orphaned
    statefulsetName: multi-replica-set-pvc-resize-0
  resourcesNotReady:
  - kind: StatefulSet
    message: 'Not all the Pods are ready (wanted: 2, updated: 1, ready: 1, current: 2)'
    name: multi-replica-set-pvc-resize-0
  version: ""
Validate data exists on the updated Persistent Volume Claim.
If you reuse Persistent Volumes, you can find the data that you inserted in Step 2 in the databases stored on the Persistent Volumes:
kubectl exec -it <my-replica-set>-1 \
  /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo

<my-replica-set>:PRIMARY> use test
switched to db test
<my-replica-set>:PRIMARY> db.tmp.count()
1
Manually Expand Storage
Create or identify a persistent custom resource.
Use an existing database resource or create a new one with persistent storage. Wait until the resource reaches the Running state.
Example
A database resource with persistent storage would include:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.4.0"
  project: my-project
  credentials: my-credentials
  type: ReplicaSet
  podSpec:
    persistence:
      single:
        storage: "1Gi"
Insert data into the database that the resource serves.
Start mongo in the Kubernetes cluster:

kubectl exec -it <my-replica-set>-0 \
  /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo

Insert data into the test database:

<my-replica-set>:PRIMARY> use test
switched to db test
<my-replica-set>:PRIMARY> db.tmp.insertOne({"foo":"bar"})
{
  "acknowledged" : true,
  "insertedId" : ObjectId("61128cb4a783c3c57ae5142d")
}
Patch each Persistent Volume Claim.
Invoke the following commands for the entire replica set:
kubectl patch pvc/"data-<my-replica-set>-0" -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
kubectl patch pvc/"data-<my-replica-set>-1" -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
kubectl patch pvc/"data-<my-replica-set>-2" -p='{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
Wait until each Persistent Volume Claim gets to the following condition:
- lastProbeTime: null
  lastTransitionTime: "2019-08-01T12:11:39Z"
  message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
  status: "True"
  type: FileSystemResizePending
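One way to watch for this condition is to describe one of the claims and check its Conditions section, for example:

```shell
# Show events and conditions for one of the patched claims;
# look for FileSystemResizePending under Conditions.
kubectl describe pvc data-<my-replica-set>-0
```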
Stop the Operator.
Update the Kubernetes Operator deployment definition and apply the change to your Kubernetes cluster to scale the Kubernetes Operator down to 0 replicas.

Scaling the Kubernetes Operator down to 0 replicas lets you avoid a race condition in which the Kubernetes Operator tries to restore the state of the manually updated resource to match the resource's original definition.
# Source: enterprise-operator/templates/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
spec:
  replicas: 0
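Alternatively, instead of editing and re-applying the Deployment manifest, you can scale the Kubernetes Operator down directly (this assumes the default Deployment name and namespace shown above):

```shell
# Scale the operator Deployment to zero replicas.
kubectl scale deployment/mongodb-enterprise-operator \
  -n mongodb --replicas=0
```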
Remove the StatefulSets.
Note
This step removes the StatefulSet only. The pods remain unchanged and running.
Delete a StatefulSet resource.
kubectl delete sts --cascade=false <my-replica-set>
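Note that --cascade=false is deprecated in newer kubectl releases; the equivalent modern flag is --cascade=orphan, which likewise deletes the StatefulSet while leaving its pods running:

```shell
# On kubectl 1.20 and later, orphan the pods explicitly.
kubectl delete sts --cascade=orphan <my-replica-set>
```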
Update the database resource with a new storage value.
Update the disk size. Open your preferred text editor and make changes similar to this example:
Example
To update the disk size of the replica set to 2 GB, change the storage value in the database resource specification:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.4.0"
  project: my-project
  credentials: my-credentials
  type: ReplicaSet
  podSpec:
    persistence:
      single:
        storage: "2Gi"

Recreate the StatefulSet resource with the new volume size:

kubectl apply -f my-replica-set-vol.yaml

Wait until the MongoDB custom resource reaches the Running state.
Validate data exists on the updated Persistent Volume Claim.
If you reuse Persistent Volumes, you can find the data that you inserted in Step 2 in the databases stored on the Persistent Volumes:
kubectl exec -it <my-replica-set>-1 \
  /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.4.0/bin/mongo

<my-replica-set>:PRIMARY> use test
switched to db test
<my-replica-set>:PRIMARY> db.tmp.count()
1