Configure Additional Settings

On this page

  • Select the MongoDB Version of the Cluster
  • Choosing a Release Cadence
  • Configure Backup Options for the Cluster
  • M2/M5 Tier Backup Options
  • M10+ Tier Backup Options
  • Termination Protection
  • Deploy a Sharded Cluster
  • About Shard Deployment
  • About Config Servers Deployment
  • About mongos Deployment
  • Configure the Number of Shards
  • Consideration for Upgrading a Replica Set to a Sharded Cluster
  • Enable BI Connector for Atlas
  • Read Preferences
  • Sampling Settings
  • Manage Your Own Encryption Keys
  • Prerequisites
  • Procedure
  • Configure Additional Options
  • Considerations
  • View and Edit Additional Settings
  • Set Minimum Oplog Window
  • Set Oplog Size
  • Enforce Index Key Limit
  • Allow Server-Side JavaScript
  • Enable Logging of Redacted and Anonymized Query Data
  • Set Minimum TLS Protocol Version
  • Require Indexes for All Queries
  • Default Write Concern
  • Set Transaction Lifetime
  • Set Chunk Migration Concurrency
  • Enable or Disable Fast Disk Pre-Warming
  • Set Default Timeout for Read Operations
  • Configure Replica Set Scaling Mode
  • Enable Log Redaction
  • Atlas-Managed Config Servers for Sharded Clusters

You can configure the following additional settings for your Atlas cluster.

Atlas supports creating clusters with the following MongoDB versions:

  • MongoDB 5.0

  • MongoDB 6.0

  • MongoDB 7.0

  • Latest Release (auto-upgrades)

Version availability differs by cluster tier: dedicated clusters (M10+) and Free and Shared tier clusters (M0, M2, M5) support different sets of these versions.

Important

If your cluster runs a release candidate of MongoDB, Atlas will upgrade the cluster to the stable release version when it is generally available.

To use a rapid release MongoDB version, you must select Latest Release for auto-upgrades. You can't select a specific rapid release version.

As new patch releases become available, Atlas upgrades to these releases via a rolling process to maintain cluster availability. During the upgrade to the next rapid release version, the cluster card in the Atlas UI Database Deployments page might show the FCV of your cluster instead of the MongoDB version to reflect the features that are currently available on your cluster.

To learn more about how Atlas handles end of life of major MongoDB versions, see What happens to Atlas clusters using a MongoDB version nearing end of life?

Important

Before you upgrade your cluster, refer to the current recommended best practices for major version upgrades.

To select the MongoDB version for your cluster, use the dropdown in the Additional Settings section of the cluster form.

You can upgrade an existing Atlas cluster to a newer major MongoDB version, if available, when you scale a cluster. However, you can't downgrade a cluster from one major version to a previous major version.

Important

If your project contains a custom role that uses actions introduced in a specific MongoDB version, you can't create a cluster with a MongoDB version less than that version unless you delete the custom role.

You can set your Atlas clusters to follow either a major release cadence or a rapid release cadence.

Free-tier and shared-tier clusters must follow a major release cadence. You can configure a dedicated-tier cluster to follow a major release cadence by selecting a specific MongoDB version from the dropdown in the Additional Settings section of the cluster form.

Atlas does not automatically upgrade clusters on the major release cadence. You must schedule a manual upgrade to each new major release as it enters general availability.

You can configure a dedicated-tier cluster to follow a rapid release cadence by selecting Latest Release from the dropdown in the Additional Settings section of the cluster form.

You can configure a cluster for rapid releases only if it is running the most recent major release of MongoDB. If your cluster is running a prior major release, manually upgrade to the most recent major release to enable the transition to rapid release.

Atlas uses the most recent MongoDB release for clusters that follow the rapid release cadence. Atlas automatically upgrades these clusters to the new major and rapid release versions via a rolling process to maintain cluster availability as each release becomes available. During the upgrade to the next rapid release version, the cluster in the Atlas UI Clusters page might show the FCV of your cluster instead of the MongoDB version to reflect the features that are currently available on your cluster.

Note

If you switch a cluster from the major release to the rapid release cadence, it will upgrade directly to the currently available rapid release. For example, if MongoDB 6.2 is the latest rapid release and you configure a cluster running 6.0 for rapid release, it will upgrade directly to MongoDB 6.2.

You can revert a cluster that follows the rapid release cadence to the major release cadence by selecting the most recent major release from the Select a Version dropdown menu. However, you can only do this before the first rapid release of the year is available. After a cluster updates from a major release version to a rapid release version, you can't revert the cluster until the next major release.

To learn more about MongoDB versions, see MongoDB Versioning in the MongoDB Manual. For additional details about the rapid release cadence, see Understanding the MongoDB Stable API and Rapid Release Cadence.

This section describes the backup configuration options for your Atlas cluster.

Atlas automatically enables backups for M2 and M5 Shared clusters and you can't disable them. To learn more, see Shared Cluster Backups.

To enable backups for an M10+ Atlas cluster, toggle Turn on Backup (M10 and up) to Yes. If enabled, Atlas takes snapshots of your databases at regular intervals and retains them according to your project's retention policy.

Note

If you have a Backup Compliance Policy enabled, you can't disable Cloud Backup. If the Backup Compliance Policy has the Require Point in Time Restore to all clusters option set to On, you also can't disable Continuous Cloud Backup without assistance from MongoDB Support. To disable Continuous Cloud Backup, the security or legal representative specified for the Backup Compliance Policy must request support and complete an extensive verification process.

Atlas provides the following backup options for M10+ clusters:

  • Cloud Backups: Atlas takes incremental snapshots of the data in your cluster and lets you restore the data from those snapshots. Atlas stores snapshots in the same cloud provider region as the replica set member targeted for snapshots. After Atlas restores a snapshot, Atlas replays the oplog to restore a cluster from a particular point in time within a window specified in the backup policy.

  • Legacy Backups (Deprecated): Legacy Backup was deprecated on March 23, 2020.

To enable Termination Protection for a cluster, toggle Termination Protection to Yes.

If enabled, Atlas prevents users from deleting the cluster. To delete a cluster that has termination protection enabled, you must first disable termination protection. By default, Atlas disables termination protection for all clusters.

To learn more about terminating your cluster, see Terminate One Deployment.

Tip

You can configure Online Archive to move infrequently accessed data from your Atlas cluster to a MongoDB-managed read-only federated database instance instead of sharding your collection or upgrading your cluster tier. To learn more about Online Archive, see Manage Online Archives.

To deploy your cluster as a sharded cluster, toggle Shard your cluster (M30 and up) to Yes.

Sharded clusters support horizontal scaling and consist of shards, config servers and mongos routers. To learn more, see About Config Servers Deployment. Config servers must remain readable for sharded read operations to continue to function.

If you enable Atlas-managed config servers, Atlas may colocate config server data with application data instead of using a dedicated config server. To learn more, see Atlas-Managed Config Servers for Sharded Clusters.

Atlas deploys each shard as a three-node replica set, where each node deploys using the configured Cloud Provider & Region, Cluster Tier, and Additional Settings. Atlas deploys one mongod per shard node.

For cross-region clusters, the number of nodes per shard is equal to the total number of electable and read-only nodes across all configured regions. Atlas distributes the shard nodes across the selected regions.

For dedicated config servers, Atlas deploys the config servers as a three-node replica set. The config servers run on M30 cluster tiers. In multi-region clusters, config servers are distributed across regions.

For cross-region clusters, Atlas distributes the config server replica set nodes to ensure optimal availability. For example, Atlas might deploy the config servers across three distinct availability zones and three distinct regions if supported by the selected cloud service provider and region configuration. Config servers must remain readable for sharded read operations to continue to function. To learn more, see Config Server Availability.

If you enable Atlas-managed config servers, Atlas may colocate config server data with application data instead of using a dedicated config server. To learn more, see Atlas-Managed Config Servers for Sharded Clusters.

A regional outage or regional outage simulation that affects the highest priority regions in a sharded cluster could cause the cluster to become inoperable for read operations. To restore the config servers, do the following:

  • Configure a read preference that allows reading from secondary nodes.

  • Reconfigure the cluster to regain electable nodes.

Atlas deploys one mongos router for each node in each shard. For cross-region clusters, this allows clients using a MongoDB driver to connect to the geographically "nearest" mongos.

To calculate the number of mongos routers in a cluster, multiply the number of shards by the number of replica set nodes per shard. For example, a cluster with 3 shards and 3 nodes per shard runs 9 mongos routers.

You cannot convert a sharded cluster deployment to a replica set deployment.

To learn more about how the number of server instances affect cost, see Number of Nodes.

To learn more about sharded clusters, see Sharding in the MongoDB manual.

This field is visible only if the deployment is a sharded cluster.

Your cluster can have between 1 and 100 shards, inclusive.

To scale up a replica set to a sharded cluster with multiple shards, you must first scale up to a single-shard cluster, restart your application and reconnect to the cluster, and then add additional shards.

If you don't reconnect the application clients, your application may suffer from data outages.

After you scale up a replica set cluster to a single-shard cluster, you can set the number of shards to deploy with the sharded cluster.

If you are reducing the number of shards in your sharded cluster, Atlas removes shards in descending order based on the number in the "_id" field (see Sharded Cluster Configuration). For example, consider a sharded cluster with the following three shards:

  • "shard0"

  • "shard1"

  • "shard2"

If you set the number of shards to two, Atlas removes "shard2" from the cluster.
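A minimal mongosh sketch, assuming a database user whose role allows the listShards command (the Atlas admin role does), that shows the shard "_id" values this ordering is based on. The shard names below follow the simplified example above; Atlas-generated names look more like atlas-abc123-shard-0.

// Run in mongosh while connected to the sharded cluster through mongos.
// Lists each shard and its "_id"; when you reduce the shard count, Atlas
// removes the shards with the highest-numbered "_id" values first.
db.adminCommand({ listShards: 1 })
// Example result (abridged, values illustrative):
// { shards: [ { _id: 'shard0', ... }, { _id: 'shard1', ... }, { _id: 'shard2', ... } ], ok: 1 }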

Important

When you remove a shard, Atlas uses the movePrimary command to move any unsharded databases in that shard to a remaining shard.

All sharded collections remain online and available during the shard removal process. However, read or write operations to unsharded collections during the movePrimary operation can result in unexpected behavior, including migration failure or data loss.

We recommend moving the primary shard for any databases containing unsharded collections before removing the shard.

For more information, see Remove Shards from an Existing Sharded Cluster.

Don't create a sharded cluster with a single shard for production environments. Single-shard sharded clusters don't provide the same benefits as multi-shard configurations. After you create a single-shard cluster, restart your application, reconnect to the cluster, and then add more shards to your cluster.

If your cluster tier is M30 or higher, you can upgrade your replica set deployment to a sharded cluster deployment.

To scale up a replica set to a sharded cluster with multiple shards, you must first scale up to a single-shard cluster, restart your application and reconnect to the cluster, and then add additional shards.

If you don't restart the application clients, your data might be inconsistent once Atlas begins distributing data across shards.

If you don't reconnect the application clients, your application may suffer from data outages.

  • If you are using a DNS Seed List connection string, your application automatically connects to the mongos for your sharded cluster.

  • If you are using a standard connection string, you must update your connection string to reflect your new cluster topology.
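For illustration only, the two connection string formats look like the following; the host names and options are hypothetical placeholders, not values from your cluster. Copy the actual strings from the Connect dialog in the Atlas UI.

// DNS seed list (SRV) format: the SRV record resolves to the current mongos
// routers, so the string keeps working after the topology changes.
const srvUri =
  "mongodb+srv://cluster0.example.mongodb.net/?retryWrites=true&w=majority";

// Standard format: hosts are listed explicitly, so you must update the string
// to point at the mongos routers of the new sharded cluster.
const standardUri =
  "mongodb://cluster0-shard-00-00.example.mongodb.net:27017," +
  "cluster0-shard-00-01.example.mongodb.net:27017," +
  "cluster0-shard-00-02.example.mongodb.net:27017/?ssl=true&authSource=admin";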

Important

Atlas BI Connector is approaching end-of-life. It will be deprecated and no longer supported in June 2025.

MongoDB is transitioning away from the BI Connector for Atlas to Atlas SQL. To learn about transitioning to the new interface, see Transition from Atlas BI Connector to Atlas SQL.

To enable BI Connector for Atlas for this cluster, toggle Enable Business Intelligence Connector (M10 and up) to Yes.

Note

The MongoDB Connector for Business Intelligence for Atlas (BI Connector) is only available for M10 and larger clusters.

The BI Connector is a powerful tool that provides users with SQL-based access to their MongoDB databases. As a result, the BI Connector performs operations that may be CPU- and memory-intensive. Given the limited hardware resources on M10 and M20 cluster tiers, you may experience performance degradation of the cluster when you enable the BI Connector. If this occurs, scale up to an M30 or larger cluster or disable the BI Connector.

If enabled, select the node type from which BI Connector for Atlas should read.

The following list describes the available read preferences for BI Connector and their corresponding readPreference and readPreferenceTags connection string options.

  • Primary: Read from the primary node. readPreference: primary. readPreferenceTags: none.

  • Secondary: Read from secondary nodes. readPreference: secondary. readPreferenceTags: { nodeType : ELECTABLE } or { nodeType : READ_ONLY }.

  • Analytics: Read from analytics nodes. readPreference: secondary. readPreferenceTags: { nodeType : ANALYTICS }.

The nodeType read preference tag dictates the type of node BI Connector for Atlas connects to. You can specify the following values for this option:

  • ELECTABLE restricts BI Connector to the primary and electable secondary nodes.

  • READ_ONLY restricts BI Connector to connecting to non-electable secondary nodes.

  • ANALYTICS restricts BI Connector to connecting to analytics nodes.

    Tip

    When you use the Analytics read preference, Atlas places BI Connector for Atlas on the same hardware as the analytics nodes that BI Connector for Atlas reads from.

    By isolating electable data-bearing nodes from the BI Connector for Atlas, electable nodes don't compete for resources with BI Connector for Atlas, thus improving cluster reliability and performance.

For high traffic production environments, connecting to the Secondary Node(s) or Analytics Node(s) may be preferable to connecting to the Primary Node.

For clusters with one or more analytics nodes, select Analytics Node to isolate BI Connector for Atlas queries from your operational workload and read from dedicated, read-only analytics nodes. With this option, electable nodes don't compete for resources with BI Connector for Atlas, thus improving cluster reliability and performance.
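As a sketch of how the readPreference and readPreferenceTags values above appear in a standard MongoDB connection string (host name hypothetical; Atlas applies the equivalent settings for you when you choose a read preference in the UI):

// Analytics read preference: secondary reads restricted to analytics nodes.
const analyticsUri =
  "mongodb://cluster0-shard-00-02.example.mongodb.net:27017/" +
  "?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS";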

To generate a relational schema, the BI Connector requires sampling data from MongoDB.

You can't use a .drdl file or the mongodrdl command to replace the sampling stage in the Atlas BI Connector.

You can configure the following sampling settings:

  • Schema Sample Size (integer): Optional. The number of documents that the BI Connector samples for each database when it gathers schema information. To learn more, see the BI Connector documentation.

  • Sample Refresh Interval (integer): Optional. The frequency, in seconds, at which the BI Connector re-samples data to recreate the schema. To learn more, see the BI Connector documentation.

Note

This feature is not available for M0 (Free), M2, and M5 clusters. To learn more about which features are unavailable, see Atlas M0 (Free Cluster), M2, and M5 Limits.

Atlas encrypts all cluster storage and snapshot volumes, ensuring the security of all cluster data at rest (Encryption at Rest). Atlas Project Owners can configure an added layer of encryption on their data at rest using the MongoDB Encrypted Storage Engine and their Atlas-compatible Encryption at Rest provider.

Atlas supports the following Encryption at Rest providers:

  • Amazon Web Services KMS

  • Azure Key Vault

  • Google Cloud KMS

To start managing your own encryption keys for this cluster, toggle Encryption using your Key Management (M10 and up) to Yes.

Atlas Encryption at Rest using your Key Management is available for M10+ replica set clusters. Atlas Encryption at Rest supports encrypting Cloud Backup snapshots only. You can't enable Encryption at Rest on a cluster that uses Legacy Backups (Deprecated).

Managing your own encryption keys increases the hourly run costs of your clusters. To learn more about Atlas billing for advanced security features, see Advanced Security.

Important

If Atlas can't access the Atlas project key management provider or the encryption key used to encrypt a cluster, that cluster becomes inaccessible and unrecoverable. Exercise extreme caution before you modify, delete, or disable an encryption key or the key management provider credentials that Atlas uses.

You can configure the following mongod runtime options on M10+ paid tier clusters.

Atlas dynamically modifies the Oplog Size for replica sets and sharded clusters. However, for the Minimum TLS Protocol Version and Allow Server-Side JavaScript settings, it performs a rolling restart of the shard members and the config server replica set. To learn more about how Atlas supports high availability during maintenance operations, see How does MongoDB Atlas deliver high availability?.

To view and edit these settings:

To update the advanced configuration settings for one cluster using the Atlas CLI, run the following command:

atlas clusters advancedSettings update <clusterName> [options]

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas clusters advancedSettings update.


To view and edit these settings with the Atlas UI, open the More Configuration Options under Additional Settings in the cluster form.

Modify the retention duration for oplog entries in the oplog of the cluster. By default, Atlas retains entries for 24 hours before the mongod removes them from the oplog.

This option corresponds to modifying the storage.oplogMinRetentionHours configuration file option for each mongod in the cluster.

To set the minimum oplog window:

  1. Verify that storage auto-scaling is enabled and that you didn't opt out of it. Atlas enables auto-scaling by default.

  2. Set the minimum oplog window to the desired value. If you don't set this value, Atlas retains oplog entries for 24 hours before the mongod removes them from the oplog.

You can set a fixed oplog size, which helps during live migration or during an intensive data load.

You can set the Set Oplog Size configuration setting only if you opt out of the cluster's storage auto-scaling.

For clusters that have storage auto-scaling enabled, you can set the Minimum Oplog Window instead. See Set Minimum Oplog Window. Atlas enables storage auto-scaling by default.

The minimum oplog size you can set is 990 megabytes. Atlas returns an error if the oplog size you choose leaves your cluster's disk with less than 25 percent of its capacity free.

To check the current oplog size and replication lag time:

  1. Connect to your cluster via mongosh.

  2. Authenticate as a user with the Atlas admin role.

  3. Run the rs.printReplicationInfo() method.

Atlas displays the current oplog size and replication lag time.
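A minimal mongosh sketch of this check; run it while connected to the cluster and authenticated as a user with the Atlas admin role.

// Prints the actual and configured oplog size and the time span (log length)
// that the oplog currently covers.
rs.printReplicationInfo()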

To set a fixed oplog size:

  1. Opt out of storage autoscaling.

  2. Set the Minimum Oplog Window to 0.

  3. Determine the size of the oplog that you need:

    • Monitor the lag time during the migration process in the Atlas UI.

    • If the lag time shown in the Atlas UI during migration approaches the replication lag time that you obtained using the rs.printReplicationInfo() method, increase the oplog size.

  4. Specify your desired Oplog Size in megabytes in the input box. This setting configures the uncompressed size of the oplog, not the size on disk.

    For sharded cluster deployments, this option modifies the oplog size of each shard in the cluster.

    This option corresponds to modifying the replication.oplogSizeMB configuration file option for each mongod in the cluster.

    Warning

    Reducing the size of the oplog requires removing data from the oplog. Atlas can't access or restore any oplog entries removed as a result of oplog reduction. Consider the ramifications of this data loss before you reduce the oplog.

Don't reduce the size of the oplog to increase the available disk space. Only the oplog collection (local.oplog.rs) can reclaim the space that reducing the oplog size saves. Other collections don't benefit from reducing oplog storage.

Enable or disable enforcement of the 1024-byte index key limit. Documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries don't exceed 1024 bytes.

If disabled, mongod writes documents that breach the limit but doesn't index them. This option corresponds to modifying the param.failIndexKeyTooLong parameter via the setParameter command for each mongod in the cluster.

Important

Index Key Limit

param.failIndexKeyTooLong was deprecated in MongoDB version 4.2 and removed in MongoDB 4.4 and later. For MongoDB versions earlier than 4.2, set this parameter to false.

Enable or disable execution of operations that perform server-side execution of JavaScript.

  • If your cluster runs a MongoDB version less than 5.0, this option corresponds to modifying the security.javascriptEnabled configuration file option for each mongod in the cluster.

  • If your cluster runs MongoDB version 5.0 or greater, this option corresponds to modifying the security.javascriptEnabled configuration file option for each mongod and mongos in the cluster.

  • If your cluster runs MongoDB version 8.0, Allow Server-Side JavaScript is disabled by default to improve security and performance. This option corresponds to the security.javascriptEnabled configuration file option for each mongod and mongos in the cluster.

Note

In MongoDB version 5.0 and later, security.javascriptEnabled applies to mongos as well.

Include redacted and anonymized $queryStats output in MongoDB logs. $queryStats output does not contain literals or field values. Enabling this setting might impact the performance of your cluster.

Note

You can enable logging of query data only for Atlas clusters that run MongoDB 7.1 or later.

Set the minimum TLS version that the cluster accepts for incoming connections. This option corresponds to configuring the net.tls.disabledProtocols configuration file option for each mongod in the cluster.

Note

TLS 1.0 Deprecation

If you are considering this option as a method for enabling the deprecated Transport Layer Security (TLS) 1.0 protocol version, read What versions of TLS does Atlas support? before proceeding. Atlas's deprecation of TLS 1.0 improves the security of your data in transit and aligns with industry best practices. Enabling TLS 1.0 for any Atlas cluster carries security risks. Consider enabling TLS 1.0 only for as long as it takes to update your application stack to support TLS 1.1 or later.

Enable or disable the execution of queries that require a collection scan to return results. This option corresponds to modifying the notablescan parameter via the setParameter command for each mongod in the cluster.
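A sketch of the effect with a hypothetical movies collection: when this setting requires indexes (table scans disabled), a query on an unindexed field returns an error instead of scanning the collection.

// Assumes a hypothetical "movies" collection with no index on "title".
db.movies.find({ title: "Example" })   // fails: the query would require a collection scan
db.movies.createIndex({ title: 1 })    // add an index on the queried field
db.movies.find({ title: "Example" })   // now uses the index and succeeds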

Set the default level of acknowledgment requested from MongoDB for write operations for this cluster.

Starting with MongoDB 5.0, the default write concern for clusters is majority.
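To confirm the default write concern your cluster currently uses, you can run the getDefaultRWConcern command from mongosh, assuming your database user's role permits it:

// Returns the cluster-wide default read and write concern settings.
db.adminCommand({ getDefaultRWConcern: 1 })
// On MongoDB 5.0+ the reported defaultWriteConcern is { w: 'majority' }
// unless an administrator has overridden it.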

Set the maximum lifetime of multi-document transactions. This option corresponds to modifying the transactionLifetimeLimitSeconds parameter via the setParameter command for each mongod in the cluster.

Important

You can't set the transaction lifetime to less than one second.

The default transaction lifetime for clusters is 60 seconds.
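Atlas applies this setting for you. For illustration only, the underlying parameter can be inspected from mongosh, assuming your database user's role allows getParameter:

// Reads the current value of the parameter that this setting corresponds to.
db.adminCommand({ getParameter: 1, transactionLifetimeLimitSeconds: 1 })
// Example result (abridged): { transactionLifetimeLimitSeconds: 60, ok: 1 }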

For sharded Atlas clusters running MongoDB 5.0.15 or later, or 6.0.6 or later, you can set the number of threads on the source and receiving shards to improve the performance of chunk migration. You can set the value to half the total number of CPU cores. To learn more, see chunkMigrationConcurrency.

To enable fast disk pre-warming for a cluster, toggle Allow Fast Disk Pre-Warming to Yes.

To disable fast disk pre-warming for a cluster, toggle Allow Fast Disk Pre-Warming to No.

Due to the design of the underlying cloud provider infrastructure, disk pre-warming occurs whenever Atlas needs to provision a new node in a cluster, such as when you add a new node to an existing region. Disk pre-warming temporarily uses a hidden secondary node.

Fast disk pre-warming is quicker than background disk warming. By default, Atlas enables fast disk pre-warming for your deployment. While Atlas pre-warms a node, it hides that node, which prevents the node from serving read operations.

Consider the following recommendations:

  • If you have workloads that seek consistent query latency, enable this setting.

  • If you have workloads that seek maximum availability guarantees over consistent query performance, and you require that the newly added or replaced node is immediately active and visible, disable this setting, and use a custom connection string with tags for the node that undergoes pre-warming, until the pre-warming process completes. Using this connection string prevents reads on the node while most of its IOPS are utilized by the pre-warming process.

For clusters running MongoDB version 8.0+, you can specify the default maximum timeout in milliseconds of all read operations for these clusters. This protects your database against unintentional long-running queries. This option corresponds to the cluster parameter defaultMaxTimeMS.
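For illustration, you can read the current value of this cluster parameter from mongosh on a MongoDB 8.0+ cluster, assuming your database user's role allows getClusterParameter; the result shown is abridged and its values are illustrative.

// Returns the cluster parameter that backs this Atlas setting.
db.adminCommand({ getClusterParameter: "defaultMaxTimeMS" })
// Example result (abridged):
// { clusterParameters: [ { _id: 'defaultMaxTimeMS', readOperations: 0, ... } ], ok: 1 }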

Modify the replica set scaling mode for your cluster. By default, Atlas scales nodes In Parallel By Workload Type, which means Atlas scales your analytics nodes in parallel with your operational nodes.

Atlas can also scale a replica set with the In Parallel By Node Type and Sequential modes.

In Parallel By Node Type mode is for large, dynamic workloads that require frequent and timely cluster tier scaling. In this mode, Atlas scales your electable nodes in parallel with your read-only and analytics nodes. This is the fastest scaling strategy, but it might impact the latency of workloads that perform extensive secondary reads.

Sequential mode is for steady-state workloads and applications that perform latency-sensitive secondary reads. In this mode, Atlas scales all nodes sequentially.

Toggle this on to prevent logging of potentially sensitive information in field values. For more information, see Log Redaction.

A rolling restart is required for enabling and disabling log redaction.

Enable or disable Atlas management of the config server type for a new sharded cluster. An Atlas-managed config server automatically switches the config server type based on criteria for optimal performance and cost savings. If you don't enable an Atlas-managed config server for a sharded cluster, Atlas always uses a dedicated config server for the cluster.

For all 8.0 Atlas sharded clusters, Atlas-managed config servers are On by default. To disable Atlas-managed config servers, set the toggle to Off. If the cluster has fewer than four shards and embedded config servers, turning off Atlas-managed config servers immediately transitions the cluster to dedicated config servers.

For each new sharded cluster with Atlas-managed config servers enabled, Atlas deploys an embedded config server for clusters with fewer than four shards and a dedicated config server for clusters with more than three shards.

Embedded config servers colocate your application data with config data on a config shard. Embedded config server clusters cost less because they use fewer resources.

Dedicated config servers use a separate, dedicated config server replica set for config data. Your application data is not colocated with config data for dedicated config servers. Dedicated config server clusters cost more because they use an additional replica set.

To learn more about considerations for config server types, see Config Server Considerations.

If you enable Atlas-managed config servers, Atlas determines the initial cluster config server type as follows:

  • If the cluster shard count is greater than three, Atlas uses a dedicated config server.

  • If the cluster shard count is three or fewer, Atlas uses an embedded config server.

When you add or remove shards with Atlas-managed config servers enabled, Atlas automatically re-selects your sharded cluster's config server type using the same criteria.

All clusters with a version lower than MongoDB 8.0 use a dedicated config server.

Atlas will not change your config server type if you use any of the following features:

If you have a cluster with more than three shards that is unable to transition to a dedicated config server due to the use of these features, contact MongoDB Support to change your configuration server type.

If you enable Atlas-managed config servers, the following considerations apply:

  • For clusters running MongoDB 8.0 or later, replica set IDs don't reflect the type of data stored on the replica set.

    • Replica sets that contain the term shard in their replica set ID might store application data, config data, or both (for example: atlas-abc123-shard-0).

    • Replica sets that contain the term config in their replica set ID might store application data (for example: atlas-abc123-config-0).

  • You can restore snapshots from a cluster with a dedicated config server only to a cluster that also uses a dedicated config server.

  • You can restore snapshots from a cluster with an embedded config server only to a cluster that also uses an embedded config server.
