
Multi-Cluster Sharded Cluster

On this page

  • Limitations
  • Prerequisites
  • Example Sharded Cluster Deployment
  • Shard Overrides
  • Customize Persistence and Statefulset Settings
  • External Access Configuration
  • Members, MemberConfig, Additional MongodConfig, AgentConfig
  • Config Server Overrides
  • Mongos Overrides

You can distribute MongoDB sharded clusters over multiple Kubernetes clusters. With multi-cluster functionality, you can:

  • Improve the resilience of your deployment by distributing it across multiple Kubernetes clusters, each in a different geographic region.

  • Configure your deployment for geo sharding by deploying primary nodes of specified shards in different Kubernetes clusters that are located closer to the application or clients that depend on that data, reducing latency.

  • Tune your deployment for improved performance. For example, you can deploy read-only analytical nodes for all or specified shards in different Kubernetes clusters or with customized resource allocations.

Additionally, you can configure shard, mongos, and config server details at different levels: you can define a top-level, default configuration for shards, config servers, and mongos, customize that configuration for each Kubernetes cluster independently, and customize individual shards to suit your specific needs.

Important

The multi-cluster sharded cluster functionality makes it possible to deploy MongoDB resources across multiple Kubernetes clusters in multiple geographic regions; however, doing so will likely increase latency for replication operations.

To learn more about the specific configuration options for the multi-cluster sharded cluster topology, see the sharded cluster reference.

Limitations

  • HashiCorp Vault support is not available for Kubernetes Secret injection.

  • The Kubernetes Operator doesn't automatically shift resources from failed Kubernetes clusters into healthy ones. In case of a member cluster failure, you must redistribute resources across remaining healthy Kubernetes clusters manually by updating the CRD resource deployed to the central cluster.

  • In case of a member cluster failure, you must first manually scale failed clusters down to zero members to remove unhealthy processes. You can then redistribute resources across the remaining healthy Kubernetes clusters by updating the CRD resource deployed to the central cluster, as sketched below. See Disaster Recovery to learn more.
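
For example, assuming the three-cluster layout used in the example deployment later on this page, a minimal sketch of such a manual redistribution could set the failed cluster's member count to zero and scale a healthy cluster up to compensate (cluster names and member counts are illustrative):

  shard:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2 # scaled up from 1 to compensate for the lost member
      - clusterName: kind-e2e-cluster-2
        members: 1
      - clusterName: kind-e2e-cluster-3
        members: 0 # failed cluster scaled down to zero to remove its unhealthy processes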

Prerequisites

  • Prepare your Kubernetes clusters for a multi-cluster deployment.

  • Deploy a ConfigMap with Ops Manager or Cloud Manager connection details and MongoDB project details to your central cluster; the Kubernetes Operator requires it to deploy the MongoDB database resource.

  • Create credentials for your Ops Manager or Cloud Manager instance and your MongoDB organization and project.
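
A minimal sketch of these two prerequisite objects is shown below, assuming the ConfigMap keys baseUrl, orgId, and projectName and the Secret keys user and publicApiKey used by the Kubernetes Operator; all values are placeholders, and the names my-project and my-credentials match the names referenced in the example deployment below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project
data:
  baseUrl: https://ops-manager.example.com:8443 # your Ops Manager or Cloud Manager URL
  orgId: "<your-organization-id>"
  projectName: multi-cluster-sharded # project to create or reuse
---
apiVersion: v1
kind: Secret
metadata:
  name: my-credentials
stringData:
  user: "<ops-manager-user-or-public-api-key>"
  publicApiKey: "<corresponding-api-key>"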

Example Sharded Cluster Deployment

When applied to your central cluster, the following example deploys a sharded MongoDB cluster with 3 shards, configured as follows:

  • Each shard has nodes distributed over all Kubernetes clusters (1 node per cluster).

  • By default, each shard's replica set:

    • Has 3 voting members in total deployed across 3 clusters (1 node in each cluster)

    • The primary member is preferred to be in kind-e2e-cluster-1.

  • With a shardOverride for shard sc-0, we change the default values (specified in spec.shard) to the following:

    • 5 members in total

    • 2 members in kind-e2e-cluster-1

    • 2 members in kind-e2e-cluster-2

    • 1 member in kind-e2e-cluster-3

    • When possible, the primary member is preferred to be in kind-e2e-cluster-2

  • Customized default storage settings for all shards in all clusters as defined in spec.shardPodSpec

  • A Config Server Replica Set with 3 members, with 1 member in each cluster

  • 3 mongos instances deployed in total, with 1 instance in each cluster

The default configuration consists of three shards, each with one member on each of the three clusters, for a total of three members per shard. In the overrides, however, we change shard sc-0 to have five members: two on cluster 1, two on cluster 2, and, as per the default, one on cluster 3.

This example configuration also uses shardOverrides to shift the primary of shard sc-0 to cluster 2, which can reduce latency for users operating in the region where cluster 2 is located. In this way, you still have resilience across the clusters (and their regions), but if the data in shard 0 is most relevant to users in the region where cluster 2 is deployed, those users experience lower latency.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: sc
spec:
  topology: MultiCluster
  type: ShardedCluster
  # this deployment will have 3 shards
  shardCount: 3
  # you cannot specify mongodsPerShardCount, configServerCount and mongosCount
  # in MultiCluster topology
  version: 8.0.3
  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials
  persistent: true

  shardPodSpec: # applies to all shards on all clusters
    persistence:
      single:
        # all pods for all shards on all clusters will use that storage size in their
        # PersistentVolumeClaim unless overridden in spec.shard.clusterSpecList or
        # spec.shardOverrides.
        storage: 10G

  configSrvPodSpec: # applies to all config server nodes in all clusters
    persistence:
      multiple:
        data:
          storage: 2G
        journal:
          storage: 1G
        logs:
          storage: 1G

  # consider this section as a default configuration for ALL shards
  shard:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        # each shard will have only one mongod process deployed in this cluster
        members: 1
        memberConfig:
          - votes: 1
            priority: "20" # we increase the priority to have primary in this cluster
      - clusterName: kind-e2e-cluster-2
        # one member in this cluster, no votes and priority defined means it'll get
        # the default values votes=1, priority="1"
        members: 1
      - clusterName: kind-e2e-cluster-3
        members: 1 # one member in this cluster

  shardOverrides: # here you specify customizations for specific shards
    # here you specify to which shard names the following configuration will
    # apply
    - shardNames:
        - sc-0
      clusterSpecList:
        - clusterName: kind-e2e-cluster-1
          # all fields here are optional
          # shard "sc-0" will have two members instead of one, which was defined as the
          # default for all shards in spec.shard.clusterSpecList[0].members
          members: 2
          memberConfig:
            - votes: 1
              # shard "sc-0" should not have primary in this cluster like every other shard
              priority: "1"
            - votes: 1
              priority: "1"
        - clusterName: kind-e2e-cluster-2
          members: 2 # shard "sc-0" will have two members instead of one
          memberConfig:
            - votes: 1
              # both processes of shard "sc-0" in this cluster will have the same
              # likelihood to become a primary member
              priority: "20"
            - votes: 1
              priority: "20"
        # We need to specify the list of all clusters on which this shard will be
        # deployed.
        - clusterName: kind-e2e-cluster-3
          # If the clusterName element is omitted here, it will be considered as an
          # override for this shard, so that the operator shouldn't deploy any member
          # to it.
          # No fields are mandatory in here, though. In case a field is not set, it's
          # not overridden and the default value is taken from a top level spec.shard
          # settings.

  configSrv:
    # the same configuration fields are available as in
    # spec.shard.clusterSpecList.
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 1
      - clusterName: kind-e2e-cluster-2
        members: 1
      - clusterName: kind-e2e-cluster-3
        members: 1

  mongos:
    # the same configuration fields are available as in
    # spec.shard.clusterSpecList apart from storage and replica-set related
    # fields.
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 1
      - clusterName: kind-e2e-cluster-2
        members: 1
      - clusterName: kind-e2e-cluster-3
        members: 1

Shard Overrides

To deploy a sharded cluster to multiple Kubernetes clusters, you apply the sharded cluster configuration (the MongoDB custom resource YAML) to your operator cluster, that is, the Kubernetes cluster on which your Kubernetes Operator instance is deployed. This configuration's spec.shard definition is considered your deployment's base shard definition.

If you would like to customize specific shards on specific Kubernetes clusters, you can use shard overrides to update the base shard definition for a given shard, as shown in the sketch below.
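
For instance, keeping the resource name sc from the example above (shard names follow the <resource-name>-<index> pattern), a minimal sketch of an override that only raises the member count of shard sc-2 in one cluster could look like this:

  shardOverrides:
    - shardNames: [ "sc-2" ] # third shard of the resource named "sc"
      clusterSpecList:
        - clusterName: kind-e2e-cluster-1
          members: 2 # only this value deviates from the base shard definition
        - clusterName: kind-e2e-cluster-2
          members: 1
        - clusterName: kind-e2e-cluster-3
          members: 1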

The following tables list the fields that you can define to update or extend your base shard definition. The fields are listed in order of precedence: the topmost field in a given table has the lowest precedence, and the bottommost field, if defined, overrides all others (for example, a specific shard on a specific cluster).

Additionally, the override policy denoted for each field type describes whether only that specific field is overridden by the newly defined value, or whether the complete object in which that field is defined is overridden. If the entire object is overridden, any fields defined in the base shard definition that are not also explicitly defined in the override definition are removed. The merge value indicates that a single field is updated, and the replace value indicates that the complete parent object is overridden.
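
The sketch below condenses the behavior shown in the examples later in this section: the statefulSet override is merged with the pod template from spec.shardPodSpec (fields not set again are kept), whereas the podSpec.persistence override replaces the base persistence object entirely:

  shardPodSpec:
    persistence:
      single:
        storage: "5G" # base persistence definition
    podTemplate:
      spec:
        containers:
          - name: mongodb-enterprise-database
            resources:
              requests:
                cpu: 0.5 # still applies after the merge below
              limits:
                cpu: 1.0 # replaced by the merge below
  shard:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 1
        statefulSet: # merge policy: only the fields set here change
          spec:
            template:
              spec:
                containers:
                  - name: mongodb-enterprise-database
                    resources:
                      limits:
                        cpu: 2.0
        podSpec:
          persistence: # replace policy: this object replaces the whole base
            multiple:  # persistence block, so the "single: 5G" setting no longer applies
              data:
                storage: "7G"
              journal:
                storage: "6G"
              logs:
                storage: "6G"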

ShardedClusterSpec Field | To Specify | Applies to
spec.shardPodSpec | Persistence and Pod Template | All pods
spec.shardSpecificPodSpec (deprecated) | Persistence and Pod Template | All pods
spec.shard.clusterSpecList.podSpec | Persistence (Pod Template field is ignored) | All shards in the cluster
spec.shard.clusterSpecList.statefulSet | Pod Template | All shards in the cluster
spec.shardOverrides.podSpec | Persistence (Pod Template field is ignored) | All shards in the cluster
spec.shardOverrides.statefulSet | Pod Template | One shard in the cluster
spec.shardOverrides.clusterSpecList.podSpec | Persistence (Pod Template field is ignored) | One shard in a specific cluster
spec.shardOverrides.clusterSpecList.statefulSet | Pod Template | One shard in a specific cluster

Note

Deprecation of ShardSpecificPodSpec

The ShardSpecificPodSpec field is deprecated but still supported. It was previously used to specify Persistence and Pod Template parameters per shard for single-cluster sharded clusters. Now that it is deprecated, you should migrate to ShardOverrides.PodSpec and ShardOverrides.StatefulSetConfiguration. See the provided example YAML file for guidance on updating from shardSpecificPodSpec to shardOverrides for single-cluster deployments.

The following example illustrates how to override custom pod templates and persistence settings in clusterSpecList.

  shardPodSpec: # applicable to all shards in all clusters
    persistence:
      single:
        storage: "5G"
    podTemplate:
      spec:
        containers:
          - name: mongodb-enterprise-database
            resources:
              requests:
                cpu: 0.5
                memory: 1.0G
              limits:
                cpu: 1.0
                memory: 2.0G
  shard:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2
        # The below statefulset override is applicable only to pods in kind-e2e-cluster-1
        # Specs will be merged, the "request" field defined above will still be
        # applied to containers in this cluster.
        # However, limits will be replaced with below values, because
        # clusterSpecList.statefulSet.spec.template has a
        # higher priority than shardPodSpec.podTemplate
        statefulSet:
          spec:
            template:
              spec:
                containers:
                  - name: mongodb-enterprise-database
                    resources:
                      limits:
                        cpu: 1.0
                        memory: 2.5G
        # In clusterSpecList.podSpec, only persistence field must be used, the
        # podTemplate field is ignored.
        # In kind-e2e-cluster-1, we replace the persistence settings defined in
        # shardPodSpec
        podSpec:
          persistence:
            multiple:
              journal:
                storage: "6G"
              data:
                storage: "7G"
              logs:
                storage: "6G"
      - clusterName: kind-e2e-cluster-2
        members: 1

To learn more, you can review the complete file.

The following example illustrates how to define custom pod templates and persistence settings in shardOverrides.

        # In clusterSpecList.podSpec, only persistence field must be used, the
        # podTemplate field is ignored.
        # In kind-e2e-cluster-1, we define custom persistence settings
        podSpec:
          persistence:
            multiple:
              journal:
                storage: "5G"
              data:
                storage: "5G"
              logs:
                storage: "5G"
      - clusterName: kind-e2e-cluster-2
        members: 1

  shardOverrides:
    - shardNames: [ "pod-template-shards-1-2" ]
      # This override will apply to shard of index 2
      # Statefulset settings defined at this level (shardOverrides.statefulSet)
      # apply to members of shard 2 in ALL clusters.
      # This field has higher priority than shard.clusterSpecList.statefulSet, but
      # lower than shardOverrides.clusterSpecList.statefulSet
      # It has a merge policy, which means that the limits defined above for the
      # mongodb-enterprise-database container field still apply to all members in
      # that shard, except if overridden.
      statefulSet:
        spec:
          template:
            spec:
              containers:
                - name: sidecar-shard-2
                  image: busybox
                  command: [ "sleep" ]
                  args: [ "infinity" ]
      clusterSpecList:
        - clusterName: kind-e2e-cluster-1
          members: 2
          # The below statefulset override is applicable only to members of shard 2, in cluster 1
          # Specs will be merged, the "limits" field defined above will still be applied
          # to containers in this cluster together with the requests field below.
          statefulSet:
            spec:
              template:
                spec:
                  containers:
                    - name: mongodb-enterprise-database
                      resources:
                        requests: # We add a requests field in shard 2, cluster 1
                          cpu: 0.5
                          memory: 1.0G
          podSpec:
            # In shardOverrides.clusterSpecList.podSpec, only persistence field must be
            # used, the podTemplate field is ignored.
            persistence: # we assign additional disk resources in shard 2, cluster 1
              multiple:
                journal:
                  storage: "6G"
                data:
                  storage: "6G"
                logs:
                  storage: "6G"
        - clusterName: kind-e2e-cluster-2
          members: 1

To learn more, you can review the complete file.

The following example illustrates how to update from the deprecated shardSpecificPodSpec field to the new shardOverrides field.

# This file is an example of how to migrate from the old deprecated
# ShardSpecificPodSpec field to the new shardOverrides fields
# for single cluster deployments.
# The settings specified in shardOverrides are the exact equivalent to the
# ones in shardSpecificPodSpec, showing how to replicate them
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: shardspecificpodspec-migration
  namespace: mongodb-test
spec:
  # There are 4 shards in this cluster, but the shardSpecificPodSpec field
  # doesn't need to have one entry per shard, it can have less
  shardCount: 4
  mongodsPerShardCount: 2
  mongosCount: 1
  configServerCount: 3
  topology: SingleCluster
  type: ShardedCluster
  version: 8.0.3
  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials
  persistent: true

  shardPodSpec:
    # default persistence configuration for all shards in all clusters
    persistence:
      single:
        storage: "5G"
  shardSpecificPodSpec: # deprecated way of overriding shards (array)
    - persistence: # shard of index 0
        single:
          storage: "6G"
      # Specify resources settings to enterprise database container in shard 0
      podTemplate:
        spec:
          containers:
            - name: mongodb-enterprise-database
              resources:
                requests:
                  cpu: 0.5
                  memory: 1G
                limits:
                  cpu: 1.0
                  memory: 2.0G
    - persistence: # shard of index 1
        single:
          storage: "7G"
    - persistence: # shard of index 2
        single:
          storage: "7G"

  # The below shardOverrides replicate the same shards configuration as the one
  # specified above in shardSpecificPodSpec
  shardOverrides:
    - shardNames: [ "shardspecificpodspec-migration-0" ] # overriding shard #0
      podSpec:
        persistence:
          single:
            storage: "6G"
      statefulSet:
        spec:
          template:
            spec:
              containers:
                - name: mongodb-enterprise-database
                  resources:
                    requests:
                      cpu: 0.5
                      memory: 1G
                    limits:
                      cpu: 1.0
                      memory: 2.0G

    # The ShardSpecificPodSpec field above has the same configuration for shards
    # 1 and 2. It is possible to specify both shard names in the override and not
    # duplicate that configuration
    - shardNames: [ "shardspecificpodspec-migration-1", "shardspecificpodspec-migration-2" ]
      podSpec:
        persistence:
          single:
            storage: "7G"

To learn more, you can review the complete file.

External Access Configuration

Field | Which clusters | Which shards
spec.externalAccess | all | all
spec.shard.externalAccess | one | all
spec.shard.clusterSpecList.externalAccess | one | all
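
For example, a sketch of these levels, assuming an externalAccess block that sets service annotations and an external domain (field names follow the Operator's externalAccess settings; the domains and annotation shown are placeholders), might look like:

  externalAccess: # deployment-wide default for all components in all clusters
    externalService:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
  shard:
    externalAccess: # settings for shard members
      externalDomain: sc.example.com
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 1
        externalAccess: # settings for shard members in this cluster only
          externalDomain: cluster-1.sc.example.com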

Members, MemberConfig, Additional MongodConfig, AgentConfig

ShardedClusterSpec Field | Members and MemberConfig | Additional MongodConfig and AgentConfig
spec.shard | not applicable | not applicable
spec.shard.clusterSpecList | applies | applies
spec.shardOverrides (single cluster only) | applies | applies
spec.shardOverrides | not applicable | not applicable
spec.shardOverrides.clusterSpecList | applies | applies
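
Following the table above, a sketch that sets members and memberConfig per cluster and adds an additionalMongodConfig block at the clusterSpecList level might look like the following; the mongod option shown is only an illustration:

  shard:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2 # members and memberConfig are defined per cluster
        memberConfig:
          - votes: 1
            priority: "20"
          - votes: 1
            priority: "10"
        additionalMongodConfig: # extra mongod settings for shard members in this cluster
          operationProfiling:
            mode: slowOp
      - clusterName: kind-e2e-cluster-2
        members: 1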

Config Server Overrides

ShardedClusterSpec Field | Persistence | Pod Template
spec.configSrv | not applicable | not applicable
spec.configSrvPodSpec | applies | applies
spec.configSrv.clusterSpecList | not applicable | not applicable
spec.configSrv.clusterSpecList.podSpec | applies | ignored
spec.configSrv.clusterSpecList.statefulSet | not applicable | applies

  configSrv:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2
        # The below statefulset override is applicable only to pods in kind-e2e-cluster-1
        # Specs will be merged, the "request" field defined above will still be applied to containers in this cluster
        # However, limits will be replaced with below values, because clusterSpecList.statefulSet.spec.template has a
        # higher priority than configSrvPodSpec.podTemplate
        statefulSet:
          spec:
            template:
              spec:
                containers:
                  - name: mongodb-enterprise-database
                    resources:
                      limits:
                        cpu: 1.0
                        memory: 2.5G
        # In clusterSpecList.podSpec, only persistence field must be used, the podTemplate field is ignored.
        podSpec: # In kind-e2e-cluster-1, we replace the persistence settings defined in configSrvPodSpec
          persistence:
            multiple:
              journal:
                storage: "6G"
              data:
                storage: "7G"
              logs:
                storage: "6G"
      - clusterName: kind-e2e-cluster-2
        members: 1
  mongos:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2
      - clusterName: kind-e2e-cluster-2
        members: 1

  shard:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2
      - clusterName: kind-e2e-cluster-2
        members: 1

To learn more, you can review the complete file.

ShardedClusterSpec Field | To Specify | Applies to
ConfigSrvSpec | not applicable | not applicable
ConfigSrvPodSpec | applies | not applicable
ConfigSrvSpec.ClusterSpecList | not applicable | applies
ConfigSrvSpec.ClusterSpecList.PodSpec | ignored | not applicable
ConfigSrvSpec.ClusterSpecList.StatefulSetConfiguration | applies | not applicable
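
Mongos overrides follow the same clusterSpecList pattern, minus the storage and replica-set related fields mentioned earlier. A sketch of a per-cluster pod template override for mongos might look like:

  mongos:
    clusterSpecList:
      - clusterName: kind-e2e-cluster-1
        members: 2
        statefulSet: # pod template override for mongos pods in this cluster only
          spec:
            template:
              spec:
                containers:
                  - name: mongodb-enterprise-database
                    resources:
                      limits:
                        cpu: 1.0
                        memory: 2.0G
      - clusterName: kind-e2e-cluster-2
        members: 1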
