MongoDB Enterprise Kubernetes Operator

Considerations

On this page

  • Deploy the Recommended Number of MongoDB Replica Sets
  • Specify CPU and Memory Resource Requirements
  • Co-locate mongos Pods with Your Applications
  • Name Your MongoDB Service with its Purpose
  • Use Labels to Differentiate Between Deployments
  • Customize the CustomResourceDefinitions that the Kubernetes Operator Watches
  • Ensure Proper Persistence Configuration
  • Use Multiple Availability Zones

This page details best practices and system configuration recommendations for the MongoDB Enterprise Kubernetes Operator when running in production.

Deploy the Recommended Number of MongoDB Replica Sets

We recommend that you use a single instance of the Kubernetes Operator to deploy up to 20 replica sets in parallel.

You may increase this number to 50 and expect a reasonable increase in the time that the Kubernetes Operator takes to download, install, deploy, and reconcile its resources.

For 50 replica sets, the time to deploy varies and might take up to 40 minutes. This time depends on the network bandwidth of the Kubernetes cluster and the time it takes each MongoDB Agent to download MongoDB installation binaries from the Internet for each MongoDB cluster member.

To deploy more than 50 MongoDB replica sets in parallel, use multiple instances of the Kubernetes Operator.
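One way to run multiple instances, sketched below under the assumption that each instance is installed with Helm into its own namespace, is to scope each instance with the chart's operator.watchNamespace value so that no two operators reconcile the same resources. The release and namespace names here are illustrative:

```shell
# Hypothetical sketch: two Kubernetes Operator instances, each watching only
# its own namespace, so each handles a separate set of MongoDB deployments.
helm install operator-a mongodb/enterprise-operator \
  --namespace mongodb-a --create-namespace \
  --set operator.watchNamespace=mongodb-a

helm install operator-b mongodb/enterprise-operator \
  --namespace mongodb-b --create-namespace \
  --set operator.watchNamespace=mongodb-b
```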

Note

The following considerations apply:

  • All sizing and performance recommendations for common MongoDB deployments through the Kubernetes Operator in this section are subject to change. Do not treat these recommendations as guarantees or limitations of any kind.

  • These recommendations reflect performance testing findings and represent our suggestions for production deployments. We ran the tests on a cluster composed of seven AWS EC2 instances of type t2.2xlarge and a master node of type t2.medium.

  • The recommendations in this section don't discuss characteristics of any specific deployment. Your deployment's characteristics may differ from the assumptions made to create these recommendations. Contact MongoDB Support for further help with sizing.

Specify CPU and Memory Resource Requirements

In Kubernetes, each Pod includes parameters that allow you to specify CPU and memory resources for each container in the Pod.

To indicate resource bounds, Kubernetes uses the requests and limits parameters, where:

  • requests indicates a lower bound of a resource.

  • limits indicates an upper bound of a resource.
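As a minimal, hypothetical sketch (the Pod and container names are illustrative), these two parameters appear in a container specification as follows:

```yaml
# Illustrative container spec: the scheduler guarantees the "requests"
# amounts, and the kubelet enforces the "limits" amounts.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # hypothetical name
spec:
  containers:
  - name: example-container    # hypothetical name
    image: example-image       # hypothetical image
    resources:
      requests:
        cpu: 500m              # lower bound: half a CPU core
        memory: 200Mi          # lower bound: 200 mebibytes
      limits:
        cpu: "1"               # upper bound: one CPU core
        memory: 1Gi            # upper bound: one gibibyte
```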

The following sections illustrate how to set CPU and memory utilization bounds for the Kubernetes Operator Pod and for each MongoDB Pod.

For the Pods hosting Ops Manager, use the default resource limits configurations.

Co-locate mongos Pods with Your Applications

You can run the lightweight mongos instance on the same node as the applications that use MongoDB. The Kubernetes Operator supports the standard Kubernetes node affinity and anti-affinity features. Using these features, you can require that the mongos runs on the same node as your application.

The following abbreviated example shows affinity and multiple availability zones configuration.

The podAffinity key determines whether to schedule an application on the same node, availability zone, or data center as another application, based on the specified topologyKey.

To specify Pod affinity:

  1. Add a label and value in the spec.podSpec.podTemplate.metadata.labels YAML collection to tag the deployment. See spec.podSpec.podTemplate.metadata, and the Kubernetes PodSpec v1 core API.

  2. Specify which label the mongos uses in the spec.mongosPodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector YAML collection. The matchExpressions collection defines the label that the Kubernetes Operator uses to identify the Pod for hosting the mongos.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.2.1-ent
  service: my-service

  ...
  podTemplate:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: failure-domain.beta.kubernetes.io/zone
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/e2e-az-name
              operator: In
              values:
              - e2e-az1
              - e2e-az2
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          topologyKey: nodeId

See the full example of multiple availability zones and node affinity configuration in replica-set-affinity.yaml in the Affinity Samples directory.

This directory also contains sample affinity and multiple zones configurations for sharded clusters and standalone MongoDB deployments.


Name Your MongoDB Service with its Purpose

Set the spec.service parameter to a value that identifies this deployment's purpose, as illustrated in the following example.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: "4.4.0-ent"
  service: drilling-pumps-geosensors
  featureCompatibilityVersion: "4.0"


Use Labels to Differentiate Between Deployments

Use the Pod affinity Kubernetes feature to:

  • Separate different MongoDB resources, such as test, staging, and production environments.

  • Place Pods on some specific nodes to take advantage of features such as SSD support.

mongosPodSpec:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S1
      topologyKey: failure-domain.beta.kubernetes.io/zone


Customize the CustomResourceDefinitions that the Kubernetes Operator Watches

You can specify which custom resources you want the Kubernetes Operator to watch. This allows you to install the CustomResourceDefinition for only the resources that you want the Kubernetes Operator to manage.

You must use Helm to configure the Kubernetes Operator to watch only the custom resources you specify. Follow the relevant Helm installation instructions, but make the following adjustments:

  1. Decide which CustomResourceDefinitions you want to install. You can install any number of the following:

     Value          Description
     mongodb        Install the CustomResourceDefinitions for database resources and watch those resources.
     mongodbusers   Install the CustomResourceDefinitions for MongoDB user resources and watch those resources.
     opsmanagers    Install the CustomResourceDefinitions for Ops Manager resources and watch those resources.
  2. Install the Helm Chart and specify which CustomResourceDefinitions you want the Kubernetes Operator to watch.

    Separate each custom resource with a comma:

    helm install <deployment-name> mongodb/enterprise-operator \
    --set operator.watchedResources="{mongodb,mongodbusers}" \
    --skip-crds

Ensure Proper Persistence Configuration

The Kubernetes deployments orchestrated by the Kubernetes Operator are stateful. The Kubernetes containers use Persistent Volumes to maintain the cluster state between restarts.

To satisfy the statefulness requirement, the Kubernetes Operator performs the following actions:

  • Creates Persistent Volumes for your MongoDB deployment.

  • Mounts storage devices to one or more directories called mount points.

  • Creates one persistent volume for each MongoDB mount point.

  • Sets the default path in each Kubernetes container to /data.

To meet your MongoDB cluster's storage needs, make the following changes in your configuration for each replica set deployed with the Kubernetes Operator:

  • Verify that persistent volumes are enabled in spec.persistent. This setting defaults to true.

  • Specify a sufficient amount of storage for the Kubernetes Operator to allocate for each of the volumes. The volumes store the data and the logs.

The following abbreviated example shows recommended persistent storage sizes.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-cluster
spec:
  ...
  persistent: true

  shardPodSpec:
    ...
    persistence:
      multiple:
        data:
          storage: "20Gi"
        logs:
          storage: "4Gi"
          storageClass: standard

For a full example of persistent volumes configuration, see replica-set-persistent-volumes.yaml in the Persistent Volumes Samples directory. This directory also contains sample persistent volumes configurations for sharded clusters and standalone deployments.

Set CPU and Memory Utilization Bounds for the Kubernetes Operator Pod

When you deploy replica sets with the Kubernetes Operator, CPU usage for the Pod that hosts the Kubernetes Operator is high during the reconciliation process and drops once the deployment completes.

For production deployments, to satisfy deploying up to 50 MongoDB replica sets or sharded clusters in parallel with the Kubernetes Operator, set the CPU and memory resources and limits for the Kubernetes Operator Pod as follows:

  • spec.template.spec.containers.resources.requests.cpu to 500m

  • spec.template.spec.containers.resources.limits.cpu to 1100m

  • spec.template.spec.containers.resources.requests.memory to 200Mi

  • spec.template.spec.containers.resources.limits.memory to 1Gi

If you don't include a unit of measurement for CPU, Kubernetes interprets the value as the number of cores. If you append the m suffix, such as 500m, Kubernetes interprets the value as millicores. To learn more, see Meaning of CPU.

The following abbreviated example shows the configuration with recommended CPU and memory bounds for the Kubernetes Operator Pod in your deployment of 50 replica sets or sharded clusters. If you are deploying fewer than 50 MongoDB clusters, you may use lower numbers in the configuration file for the Kubernetes Operator Pod.

Note

Monitoring tools report the size of the node rather than the actual size of the container.

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-enterprise-operator
      app.kubernetes.io/instance: mongodb-enterprise-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-enterprise-operator
        app.kubernetes.io/instance: mongodb-enterprise-operator
    spec:
      serviceAccountName: mongodb-enterprise-operator
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
      - name: mongodb-enterprise-operator
        image: quay.io/mongodb/mongodb-enterprise-operator:1.9.2
        imagePullPolicy: Always
        args:
        - "-watch-resource=mongodb"
        - "-watch-resource=opsmanagers"
        - "-watch-resource=mongodbusers"
        command:
        - "/usr/local/bin/mongodb-enterprise-operator"
        resources:
          limits:
            cpu: 1100m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 200Mi

For a full example of CPU and memory utilization resources and limits for the Kubernetes Operator Pod that satisfy parallel deployment of up to 50 MongoDB replica sets, see the mongodb-enterprise.yaml file.

Set CPU and Memory Utilization Bounds for Each MongoDB Pod

The values for Pods hosting replica sets or sharded clusters map to the requests field for CPU and memory for the created Pod. These values are consistent with the considerations stated for MongoDB hosts.

The Kubernetes Operator uses its allocated memory for processing, for the WiredTiger cache, and for storing packages during the deployments.

For production deployments, set the CPU and memory resources and limits for the MongoDB Pod as follows:

  • spec.podSpec.podTemplate.spec.containers.resources.requests.cpu to 0.25

  • spec.podSpec.podTemplate.spec.containers.resources.limits.cpu to 0.25

  • spec.podSpec.podTemplate.spec.containers.resources.requests.memory to 512M

  • spec.podSpec.podTemplate.spec.containers.resources.limits.memory to 512M

If you don't include a unit of measurement for CPU, Kubernetes interprets the value as the number of cores. If you append the m suffix, such as 500m, Kubernetes interprets the value as millicores. To learn more, see Meaning of CPU.

The following abbreviated example shows the configuration with recommended CPU and memory bounds for each Pod hosting a MongoDB replica set member in your deployment.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.0.0-ent
  service: my-service
  ...

  persistent: true
  podSpec:
    podTemplate:
      spec:
        containers:
        - name: mongodb-enterprise-database
          resources:
            limits:
              cpu: "0.25"
              memory: 512M

For a full example of CPU and memory utilization resources and limits for Pods hosting MongoDB replica set members, see the replica-set-podspec.yaml file in the MongoDB Podspec Samples directory.

This directory also contains sample CPU and memory limits configurations for Pods used for sharded clusters and standalone MongoDB deployments.

Use Multiple Availability Zones

Set the Kubernetes Operator and StatefulSets to distribute all members of one replica set to different nodes to ensure high availability.

The following abbreviated example shows affinity and multiple availability zones configuration.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.2.1-ent
  service: my-service
  ...
  podAntiAffinityTopologyKey: nodeId
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S1
      topologyKey: failure-domain.beta.kubernetes.io/zone

  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/e2e-az-name
          operator: In
          values:
          - e2e-az1
          - e2e-az2

In this example, the Kubernetes Operator schedules the Pods to nodes whose kubernetes.io/e2e-az-name label value is e2e-az1 or e2e-az2, that is, to nodes in those availability zones. Change nodeAffinity to schedule the Pods to the desired availability zones.

See the full example of multiple availability zones configuration in replica-set-affinity.yaml in the Affinity Samples directory.

This directory also contains sample affinity and multiple zones configurations for sharded clusters and standalone MongoDB deployments.
