Introducing: Multi-Kubernetes Cluster Deployment Support
Resilience and scalability are critical for today's production applications. MongoDB and Kubernetes are both well known for their ability to support those needs to the highest level. A single Kubernetes cluster is typically limited to a single region, so to better enable developers using MongoDB and Kubernetes, we’ve introduced a series of updates and capabilities that make it possible to manage MongoDB across multiple Kubernetes clusters. Since those Kubernetes clusters can be located in different regions, this offers new levels of resilience and control over where your data lives. In addition to the previously released support for running MongoDB replica sets and Ops Manager across multiple Kubernetes clusters, we're excited to announce the public preview release of support for Sharded Clusters spanning multiple Kubernetes clusters (GA to follow in November 2024).
Support for deployment across multiple Kubernetes clusters is facilitated through the Enterprise Kubernetes Operator. For anyone unfamiliar with it, the Enterprise Operator automates the deployment, scaling, and management of MongoDB clusters in Kubernetes. It simplifies database operations by handling tasks such as configuration, resizing, upgrades, and failover, whilst ensuring consistent performance and reliability in the Kubernetes environment.
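For illustration, a minimal MongoDB resource managed by the Operator might look something like the sketch below. The names (my-replica-set, my-project, organization-secret) and the version string are placeholders, and the exact fields available depend on your Operator version:

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: my-replica-set              # placeholder name
      namespace: mongodb
    spec:
      type: ReplicaSet
      members: 3                        # the Operator creates and manages the underlying pods
      version: "7.0.2-ent"              # example MongoDB Enterprise version
      opsManager:
        configMapRef:
          name: my-project              # ConfigMap pointing at your Ops Manager project
      credentials: organization-secret  # Secret holding Ops Manager API credentials
      persistent: true

You declare the desired state in a resource like this, and the Operator reconciles it: creating the pods, wiring up the configuration, and applying changes such as version upgrades or resizing when you edit the resource.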
Multi-Kubernetes cluster deployment support enhances availability, resilience, and scalability for critical MongoDB workloads, empowering developers to efficiently manage these workloads within Kubernetes. This approach unlocks the highest level of availability and resilience by allowing shards to be located closer to users and applications, increasing geographical flexibility and reducing latency for globally distributed applications.
Deploying replica sets across multiple Kubernetes clusters
MongoDB replica sets are engineered to ensure high availability, data redundancy, and automated failover in database deployments. A replica set consists of multiple MongoDB instances—one primary and several secondary nodes—all maintaining the same dataset. The primary node handles all write operations, while the secondary nodes replicate the data and are available to take over as primary if the original primary node fails. This architecture is critical for maintaining continuous data availability, especially in production environments where downtime can be costly.
Support for deploying MongoDB replica sets across multiple Kubernetes clusters helps remove the Kubernetes cluster itself as a single point of failure. It enables you to distribute your data not only across nodes within a single Kubernetes cluster, but across different clusters and geographic locations, ensuring your deployments remain operational (even if one or more Kubernetes clusters or locations fail) and facilitating faster disaster recovery.
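As an illustrative sketch (cluster-1.example.com and the other names are placeholders for your own member clusters, project ConfigMap, and credentials Secret, and fields may vary by Operator version), a MongoDBMultiCluster resource lets you state how many replica set members should run in each Kubernetes cluster:

    apiVersion: mongodb.com/v1
    kind: MongoDBMultiCluster
    metadata:
      name: multi-replica-set           # placeholder name
      namespace: mongodb
    spec:
      type: ReplicaSet
      version: "7.0.2-ent"
      credentials: organization-secret
      opsManager:
        configMapRef:
          name: my-project
      duplicateServiceObjects: false
      clusterSpecList:                  # members to run in each member Kubernetes cluster
        - clusterName: cluster-1.example.com
          members: 2
        - clusterName: cluster-2.example.com
          members: 2
        - clusterName: cluster-3.example.com
          members: 1

In a layout like this, the five members form a single replica set spread across three Kubernetes clusters, so losing any one cluster still leaves a voting majority and the replica set can continue serving traffic.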
To learn more about how to deploy replica sets across multiple Kubernetes clusters using the Enterprise Kubernetes Operator, visit our documentation.
Sharding MongoDB across multiple Kubernetes clusters
While replica sets duplicate data for resilience (and higher read rates), MongoDB sharded clusters divide the data up between shards, each of which is effectively a replica set, providing resilience for each portion of the data. Crucially, this also helps your database handle large datasets and higher-throughput operations since each shard has a primary member handling write operations to that portion of the data; this allows MongoDB to scale up the write throughput horizontally, rather than requiring vertical scaling of every member of a replica set. In a Kubernetes environment, each shard can now be deployed across multiple Kubernetes clusters, giving every shard higher resilience in the event of a loss of a Kubernetes cluster or an entire geographic location. This also offers the ability to locate shards or their primaries in the same region as the applications or users accessing that portion of the data, reducing latency and improving user experience. Sharding is particularly useful for applications with large datasets and those requiring high availability and resilience as they grow.
Support for sharding MongoDB across multiple Kubernetes clusters is currently in public preview and will be generally available in November 2024.
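Because this capability is still in public preview, the exact resource fields may evolve before GA; the sketch below is illustrative only, with placeholder cluster names and member counts. Conceptually, a single resource describes how each shard, the config servers, and the mongos routers are distributed across your member Kubernetes clusters:

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: multi-sharded-cluster       # placeholder name
      namespace: mongodb
    spec:
      type: ShardedCluster
      topology: MultiCluster            # illustrative preview field; may change before GA
      version: "7.0.2-ent"
      shardCount: 2
      credentials: organization-secret
      opsManager:
        configMapRef:
          name: my-project
      shard:
        clusterSpecList:                # each shard is a replica set spread across clusters
          - clusterName: cluster-1.example.com
            members: 2
          - clusterName: cluster-2.example.com
            members: 1
      configSrv:
        clusterSpecList:                # config server replica set
          - clusterName: cluster-1.example.com
            members: 2
          - clusterName: cluster-2.example.com
            members: 1
      mongos:
        clusterSpecList:                # mongos routers placed near each application
          - clusterName: cluster-1.example.com
            members: 1
          - clusterName: cluster-2.example.com
            members: 1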
Deploying Ops Manager across multiple Kubernetes clusters
Ops Manager is the self-hosted management server that supports automation, monitoring, and backup of MongoDB on your own infrastructure.
Ops Manager's most critical function is backup of MongoDB deployments, and deploying it across multiple Kubernetes clusters greatly improves resilience and disaster recovery for your MongoDB deployments in Kubernetes. With Ops Manager distributed across several Kubernetes clusters, you can ensure that backups of deployments remain robust and available, even if one Kubernetes cluster or site fails. Furthermore, it allows Ops Manager to efficiently manage and monitor MongoDB deployments that are themselves distributed across multiple clusters, improving central oversight of your deployments.
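As a rough, illustrative sketch (the names, versions, and multi-cluster fields shown here are placeholders and may differ between Operator versions), a multi-cluster MongoDBOpsManager resource describes where the Ops Manager application, its backing application database, and backup should run across your member clusters:

    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: ops-manager                 # placeholder name
      namespace: mongodb
    spec:
      version: "7.0.4"                  # example Ops Manager version
      adminCredentials: ops-manager-admin-secret  # Secret with the first admin user
      topology: MultiCluster            # illustrative field; check your Operator version
      clusterSpecList:                  # Ops Manager application instances per Kubernetes cluster
        - clusterName: cluster-1.example.com
          members: 1
        - clusterName: cluster-2.example.com
          members: 1
      backup:
        enabled: true
      applicationDatabase:              # the replica set backing Ops Manager itself
        version: "7.0.2-ent"
        topology: MultiCluster
        clusterSpecList:
          - clusterName: cluster-1.example.com
            members: 2
          - clusterName: cluster-2.example.com
            members: 1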
To learn more about how to deploy Ops Manager across multiple Kubernetes clusters using the Enterprise Kubernetes Operator, visit our documentation.
To leverage multi-Kubernetes-cluster support, you can get started with the Enterprise Kubernetes Operator.