Live Migrate (Pull) a Replica Set into Atlas (MongoDB Before 6.0.17)
On this page
Important
Feature unavailable in Serverless Instances
Serverless instances don't support this feature at this time. To learn more, see Serverless Instance Limitations.
If your source and destination clusters are running MongoDB 6.0.17+ or 7.0.13+, you can live migrate to Atlas using this procedure.
Atlas can pull a source replica set to an Atlas cluster using the legacy live migration process. Atlas syncs from the source to the destination cluster until you cut your applications over to the destination Atlas cluster.
Once you reach the cutover step in the following procedure, stop writes to the source cluster. Stop your application instances, point them to the Atlas cluster, and restart them.
Restrictions
You can't select an M0 (Free Tier) or M2/M5 shared cluster as the source or destination for legacy live migration (pull). To migrate data from an M0 (Free Tier) or M2/M5 shared cluster to a paid cluster, change the cluster tier and type.
Legacy live migration (pull) doesn't support MongoDB 8.0 or rapid releases as the source or destination cluster version.
Legacy live migration (pull) is not supported for sharded clusters.
To live migrate a source sharded cluster that runs MongoDB 6.0.17 or earlier, upgrade the cluster to 6.0.17+ or 7.0.13+ and then live migrate it to Atlas using the newer live migration procedure.
Legacy live migration (pull) doesn't support VPC peering or private endpoints for either the source or destination cluster.
Legacy live migration (pull) doesn't support migrating source replica sets that contain time series collections.
During live migration, Atlas disables host alerts.
Required Access
To live migrate your data, you must have Project Owner access to Atlas.
Users with Organization Owner access must add themselves to the project as a Project Owner.
Prerequisites
Provide the hostname of the primary node to the live migration service.
When you migrate from MongoDB 4.4 or earlier to an Atlas cluster that runs MongoDB 5.0 or later, drop any geoHaystack indexes from your collections.
If the cluster runs with authentication, grant the user that will run the migration process the following permissions:
Read all databases and collections on the host.
Read access to the primary node's oplog.
To learn more, see Source Cluster Security.
Important
Source Cluster Readiness
To help ensure a smooth data migration, your source cluster should meet all production cluster recommendations. Check the Operations Checklist and Production Notes before beginning the Live Migration process.
Migration Path
Atlas live migration (pull) supports the following migration paths:
| Source Replica Set MongoDB Version | Destination Atlas Replica Set MongoDB Version |
|---|---|
| 4.2 | 6.0 |
| 4.4 | 6.0 |
| 5.0 | 6.0 |
Network Access
Configure network permissions for the following components:
Source Cluster Firewall Allows Traffic from Live Migration Server
Any firewalls for the source cluster must grant the MongoDB live migration server access to the source cluster.
The Atlas live migration process streams data through a MongoDB-controlled live migration server. Atlas provides the IP ranges of the MongoDB live migration servers during the live migration process. Grant these IP ranges access to your source cluster. This allows the MongoDB live migration server to connect to the source clusters.
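When you add the displayed ranges to your firewall rules, it can help to sanity-check that a rule's CIDR block actually covers a given address. The following is a minimal Node.js sketch of such a check; the addresses shown are illustrative documentation addresses, not actual Atlas migration-server ranges.

```javascript
// Check whether an IPv4 address falls inside a CIDR block, e.g. to
// verify that a firewall rule covers a server address Atlas displays.
function ipToInt(ip) {
  // "192.0.2.10" -> unsigned 32-bit integer
  return ip.split(".").reduce((acc, oct) => (acc << 8) + Number(oct), 0) >>> 0;
}

function ipInCidr(ip, cidr) {
  const [base, bitsStr] = cidr.split("/");
  const bits = Number(bitsStr);
  // Build the network mask for the prefix length.
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(ipInCidr("192.0.2.10", "192.0.2.0/24"));   // true
console.log(ipInCidr("198.51.100.7", "192.0.2.0/24")); // false
```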
Note
If your organization has strict network requirements and you cannot enable the required network access to MongoDB live migration servers, see Live Migrate a Community Deployment to Atlas.
Atlas Cluster Allows Traffic from Your Application Servers
Atlas allows connections to a cluster from hosts added to the project IP access list. Add the IP addresses or CIDR blocks of your application hosts to the project IP access list. Do this before beginning the migration procedure.
Atlas temporarily adds the IP addresses of the MongoDB migration servers to the project IP access list. During the migration procedure, you can't edit or delete this entry. Atlas removes this entry once the procedure completes.
To learn how to add entries to the Atlas IP access list, see Configure IP Access List Entries.
Pre-Migration Validation
Before starting the pull live migration procedure, Atlas runs validation checks on the source and destination clusters.
The source cluster is a replica set.
If the source cluster is a standalone, convert it to a replica set before using the pull-type live migration.
The destination Atlas cluster is a replica set.
Note
To run the migration process for a replica set, Atlas discovers the host names for the replica set based on the hostname you provide. If this fails, Atlas migrates the replica set using your provided reachable hostname. To learn more, see Network Access.
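For the standalone case above, the conversion is, in outline: restart the mongod with a replica set name, then initiate the set from mongosh. A hedged sketch follows; "rs0" and the hostname are placeholders, and the MongoDB manual covers the full procedure.

```javascript
// Run in mongosh after restarting the standalone mongod with a
// replica set name, e.g.: mongod --replSet rs0 --port 27017 ...
// "rs0" and "mongod.example.net" are placeholder values.
rs.initiate({
  _id: "rs0",
  members: [{ _id: 0, host: "mongod.example.net:27017" }]
})

// Verify the member has become PRIMARY before proceeding.
rs.status()
```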
Source Cluster Security
Various built-in roles provide sufficient privileges. For example:
For source clusters, a user must have the readAnyDatabase, clusterMonitor, and backup roles.
To verify that the database user who will run the live migration process has these roles, run the db.getUser() command on the admin database.

```javascript
use admin
db.getUser("admin")
{
  "_id" : "admin.admin",
  "user" : "admin",
  "db" : "admin",
  "roles" : [
    { "role" : "backup", "db" : "admin" },
    { "role" : "clusterMonitor", "db" : "admin" },
    { "role" : "readAnyDatabase", "db" : "admin" }
  ]
}
```
In addition, the database user from your source cluster must have the role to read the oplog on your admin database. To learn more, see Oplog Access.
Specify the user name and password to Atlas when prompted by the live migration procedure.
Atlas only supports SCRAM for connecting to source clusters that enforce authentication.
Tip
To conceal credentials when migrating, consider adding a temporary user with the minimum required permissions for migration on the source cluster, and then deleting the user once you complete the migration process.
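Following that tip, here is a minimal mongosh sketch of a temporary migration user with only the roles listed above. The user name is a placeholder; run this against the source cluster's admin database as a user administrator.

```javascript
// Connect mongosh to the source cluster as a user administrator.
const admin = db.getSiblingDB("admin");

// Temporary user with only the roles live migration needs.
admin.createUser({
  user: "liveMigrationTemp",   // placeholder name
  pwd: passwordPrompt(),       // prompt instead of hardcoding a password
  roles: [
    { role: "readAnyDatabase", db: "admin" },
    { role: "clusterMonitor", db: "admin" },
    { role: "backup", db: "admin" }
  ]
});

// After the migration completes and you have cut over:
admin.dropUser("liveMigrationTemp");
```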
If the source cluster uses a different authentication mechanism to connect, you can use mongomirror to migrate data from the source cluster to the destination Atlas cluster.
How MongoDB Secures its Live Migration Servers
In any pull-type live migration to Atlas, Atlas manages the server that runs the live migration and sends data from the source to the destination cluster.
MongoDB takes the following measures to protect the integrity and confidentiality of your data in transit to Atlas:
MongoDB encrypts data in transit between the Atlas-managed live migration server and the destination cluster. If you require encryption for data in transit between the source cluster and the Atlas-managed migration server, configure TLS on your source cluster.
MongoDB protects access to the Atlas-managed migration server instances as it protects access to any other parts of Atlas.
In rare cases where intervention is required to investigate and restore critical services, MongoDB adheres to the principle of least privilege and authorizes only a small group of privileged users to access your Atlas clusters for a minimum limited time necessary to repair the critical issue. MongoDB requires MFA for these users to log in to Atlas clusters and to establish an SSH connection via the bastion host. Granting this type of privileged user access requires approval by MongoDB senior management. MongoDB doesn't allow access by any other MongoDB personnel to your MongoDB Atlas clusters.
MongoDB allows use of privileged user accounts for privileged activities only. To perform non-privileged activities, privileged users must use a separate account. Privileged user accounts can't use shared credentials. Privileged user accounts must follow the password requirements described in Section 4.3.3 of the Atlas Security whitepaper.
You can restrict access to your clusters by all MongoDB personnel, including privileged users, in Atlas. If you choose to restrict such access and MongoDB determines that access is necessary to resolve a support issue, MongoDB must first request your permission and you may then decide whether to temporarily restore privileged user access for up to 24 hours. You can revoke the temporary 24-hour access grant at any time. Enabling this restriction may result in increased time for the response and resolution of support issues and, as a result, may negatively impact the availability of your Atlas clusters.
MongoDB reviews privileged user access authorization on a quarterly basis. Additionally, MongoDB revokes a privileged user's access when it is no longer needed, including within 24 hours of that privileged user changing roles or leaving the company. We also log any access by MongoDB personnel to your Atlas clusters, retain audit logs for at least six years, and include a timestamp, actor, action, and output. MongoDB uses a combination of automated and manual reviews to scan those audit logs.
To learn more about Atlas security, see the Atlas Security whitepaper. In particular, review the section "MongoDB Personnel Access to MongoDB Atlas Clusters".
Index Key Limits
If your MongoDB deployment contains indexes whose keys exceed the Index Key Limit, modify those indexes to remove the oversized keys before you start the live migration procedure.
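One way to spot offending values, sketched here for a single string field, is to query for documents whose indexed value exceeds the legacy 1024-byte limit. "mydb", "items", and "sku" are placeholder names, and $strLenBytes assumes the field holds strings.

```javascript
// Run in mongosh against the source cluster. Finds documents whose
// indexed "sku" value is longer than the legacy 1024-byte key limit.
const coll = db.getSiblingDB("mydb").getCollection("items");
coll.find({
  sku: { $type: "string" },                          // guard non-string values
  $expr: { $gt: [{ $strLenBytes: "$sku" }, 1024] }
});
```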
Considerations
Network Encryption
During pull live migrations, if the source cluster does not use TLS encryption for its data, the traffic from the source cluster to Atlas is not encrypted. Determine if this is acceptable before you start a pull live migration procedure.
Database Users and Roles
Atlas doesn't migrate any user or role data to the destination cluster.
If the source cluster doesn't use authentication, you must create a user in Atlas because Atlas doesn't support running without authentication.
If the source cluster enforces authentication, you must recreate the credentials that your applications use on the destination Atlas cluster. Atlas uses SCRAM for user authentication. To learn more, see Configure Database Users.
Destination Cluster Configuration
When you configure the destination cluster, consider the following:
The live migration process streams data through a MongoDB-managed live migration server. Each server runs on infrastructure hosted in the nearest region to the source cluster. The following regions are available:
- Europe
  - Frankfurt
  - Ireland
  - London
- Americas
  - Eastern US
  - Western US
- APAC
  - Mumbai
  - Singapore
  - Sydney
  - Tokyo
Use the cloud region for the destination cluster in Atlas that has the lowest network latency relative to the application servers or to your deployment hosted on the source cluster. Ideally, your application's servers should be running in the cloud in the same region as the destination Atlas cluster's primary region. To learn more, see Cloud Providers.
The destination cluster in Atlas must match or exceed the source deployment in terms of RAM, CPU, and storage. Provision a destination cluster of an adequate size so that it can accommodate both the migration process and the expected workload, or scale up the destination cluster to a tier with more processing power, bandwidth or disk IO.
To maximize migration performance, use at least an M40 cluster for the destination cluster. When migrating large data sets, use an M80 cluster with 6000 IOPS disks or higher.
You can also choose to temporarily increase the destination Atlas cluster's size for the duration of the migration process.
After you migrate your application's workload to a cluster in Atlas, contact support for assistance with further performance tuning and sizing of your destination cluster to minimize costs.
To avoid unexpected sizing changes, disable auto-scaling on the destination cluster. To learn more, see Manage Clusters.
To prevent unbounded oplog collection growth, and to ensure that the live migration's lag stays within the oplog replication window, set the oplog size to a fixed value large enough for the duration of the live migration process.
To learn more, see:
oplog Sizing in the Cluster-to-Cluster Sync documentation.
If you are observing performance issues even after you've followed these recommendations, contact support.
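A back-of-the-envelope way to reason about that fixed oplog size: estimate the generation rate from rs.printReplicationInfo() output (used size divided by the log's time span), then check how many hours of writes a given configured size retains. The numbers below are illustrative only.

```javascript
// Given the oplog's used size (MB) and the time span it covers
// (hours), estimate the window a configured oplog size provides.
function oplogWindowHours(configuredSizeMB, usedMB, logSpanHours) {
  const mbPerHour = usedMB / logSpanHours;   // generation rate
  return configuredSizeMB / mbPerHour;       // hours of writes retained
}

// A 50 GB oplog with ~2 GB/hour of writes retains about 25 hours.
console.log(oplogWindowHours(51200, 10240, 5)); // 25
```

The migration's lag time (shown during the oplog tailing phase) must stay well inside this window, or the sync can fall off the oplog and fail.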
The destination Atlas cluster must be a replica set.
You can't select an M0 (Free Tier) or M2/M5 shared-tier cluster as the destination cluster for live migration.
Don't change the featureCompatibilityVersion flag while Atlas live migration is running.
Avoid Workloads on the Target Cluster
Avoid running any workloads on the destination cluster, including workloads on namespaces that don't overlap with the live migration. This avoids potential locking conflicts and performance degradation during the live migration process.
Don't run multiple migrations to the same destination cluster at the same time.
Don't start the cutover process for your applications to the destination cluster while the live migration process is syncing.
Avoid Cloud Backups
Atlas stops taking on-demand cloud backup snapshots of the target cluster during live migration. Once you complete the cutover step in the live migration procedure on this page, Atlas resumes taking cloud backup snapshots based on your backup policy.
Avoid Namespace Changes
Don't make any namespace changes during the migration process, such as using the renameCollection command or executing an aggregation pipeline that includes the $out aggregation stage.
Avoid Elections
The live migration process makes a best attempt to continue a migration during temporary network interruptions and elections on the source or destination clusters. However, these events might cause the live migration process to fail. If the live migration process can't recover automatically, restart it from the beginning.
Migrate Your Cluster
Note
Staging and Production Migrations
Consider running this procedure twice. Run a partial migration that stops at the Perform the Cutover step first. This creates an up-to-date Atlas-backed staging cluster to test application behavior and performance using the latest driver version that supports the MongoDB version of the Atlas cluster.
After you test your application, run the full migration procedure using a separate Atlas cluster to create your Atlas-backed production environment.
Important
Avoid making changes to the source cluster configuration while the live migration process runs, such as removing replica set members or modifying mongod runtime settings such as featureCompatibilityVersion.
Pre-Migration Checklist
Before starting the live migration procedure:
If you don't already have a destination cluster, create a new Atlas deployment and configure it as needed. For complete documentation on creating an Atlas cluster, see Create a Cluster.
After your Atlas cluster is deployed, ensure that you can connect to it from all client hardware where your applications run. Testing your connection string helps ensure that your data migration process can complete with minimal downtime.
Download and install mongosh on a representative client machine, if you don't already have it.
Connect to your destination cluster using the connection string from the Atlas UI. For more information, see Connect via mongosh.
Once you have verified your connectivity to your destination cluster, start the live migration procedure.
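A quick mongosh check of that connection, once you're connected with the Atlas connection string:

```javascript
// Run in mongosh after connecting with your Atlas connection string.
db.runCommand({ ping: 1 })   // expect { ok: 1 }
db.hello().primary           // hostname:port of the current primary
```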
Procedure
Start the migration process.
Select a destination Atlas cluster.
Navigate to the destination Atlas cluster and click the ellipsis (...) icon. On the cluster list, the ellipsis (...) icon is beneath the cluster name. When you view cluster details, the ellipsis (...) icon is on the right-hand side of the screen, next to the Connect and Configuration buttons.
Click Migrate Data to this Cluster.
Atlas displays a walk-through screen with instructions on how to proceed with the live migration. The process syncs the data from your source cluster to the new destination cluster. After you complete the walk-through, you can point your application to the new cluster.
Collect the following details for your source cluster to facilitate the migration:
The hostname and port of the source cluster's primary member. Atlas only connects to the primary member of the source cluster by default. To increase resiliency and facilitate failover if needed, Atlas obtains the IP addresses of other source cluster nodes if these nodes have publicly available DNS records.
The username and password used to connect to the source cluster.
If the source cluster uses TLS/SSL and is not using a public Certificate Authority (CA), prepare the source cluster CA file.
Prepare the information as stated in the walk-through screen, then click I'm Ready To Migrate.
Atlas displays a walk-through screen that collects information required to connect to the source cluster.
Atlas displays the IP address of the MongoDB live migration server responsible for your live migration at the top of the walk-through screen. Configure your source cluster firewall to grant access to the displayed IP address.
Enter the hostname and port of the primary member of the source cluster into the provided text box. For example, enter mongoPrimary.example.net:27017.
If the source cluster enforces authentication, enter a username and password into the provided text boxes.
See Source Cluster Security for guidance on the user permissions required by Atlas live migration.
If the source replica set uses TLS/SSL and is not using a public Certificate Authority (CA), toggle the switch Is encryption in transit enabled? and copy the contents of the source cluster CA file into the provided text box.
If you wish to drop all collections on the destination replica set before beginning the migration process, toggle the switch marked Clear any existing data on your destination cluster? to Yes.
Click Validate to confirm that Atlas can connect to the source replica set.
If validation fails, check that:
You have added Atlas to the IP access list on your source replica set.
The provided user credentials, if any, exist on the source cluster and have the required permissions.
The Is encryption in transit enabled? toggle is enabled only if the source cluster requires it.
The CA file provided, if any, is valid and correct.
Click Start Migration to start the migration process.
Once the migration process begins, the Atlas UI displays the Migrating Data walk-through screen for the destination Atlas cluster.
The walk-through screen updates as the destination cluster proceeds through the migration process. The migration process includes:
Copying collections from the source to the destination cluster.
Creating indexes on the destination cluster.
Tailing of oplog entries from the source cluster.
A lag time value displays during the final oplog tailing phase that represents the current lag between the source and destination clusters. This lag time may fluctuate depending on the rate of oplog generation on the source cluster, but should decrease over time as the live migration process copies the oplog entries to the destination cluster.
When the lag timer and the Prepare to Cutover button turn green, proceed to the next step.
Perform the cutover.
When Atlas detects that the source and destination clusters are nearly in sync, it starts an extendable 120-hour (five-day) timer to begin the cutover stage of the live migration procedure. If the 120-hour period elapses, Atlas stops synchronizing with the source cluster. You can extend the time remaining by 24 hours by clicking Extend time below the <time> left to cut over timer.
If your migration is about to expire, Atlas sends you an email similar to the following example:
A migration to your Atlas cluster will expire in <number> hours! Navigate to your destination cluster to start the cutover process. If you don't take any action within <number> hours, the migration will be cancelled and you will need to start again. You can also extend the migration process if you need more time.
Once you are prepared to cut your applications over to the destination Atlas cluster, click Prepare to Cutover.
Atlas displays a series of pages, guiding you through each stage of the cutover process. Some of the items in the following list describe actions that you should do, other items describe the informational messages that Atlas displays.
Stop your application. This ensures that no more writes occur on the source cluster.
Atlas displays a screen with the following message: Almost done! Waiting for Atlas to clean up .... Atlas finalizes the migration. This can take a few hours. While finalizing the migration, Atlas completes metadata changes, removes the MongoDB Application Server subnets from the destination cluster's IP access list, and removes the database user that live migration used to import data to the destination cluster.
If the cutover process has been in progress for at least 12 hours, Atlas sends you an email that suggests you check on the migration process or contact support.
Atlas is still finalizing the migration, but the destination cluster is ready to accept writes. You can restart your application and connect to your new Atlas destination cluster now if you want to minimize downtime. Don't delete your source cluster until the migration is fully complete.
Click Connect to your new cluster. Atlas redirects you to the Connect to Atlas page, where you can choose a connection method.
Resume writes to the destination cluster.
Confirm that your application is working with the destination Atlas cluster and verify your data on the destination cluster.
If the migration succeeds, the You have successfully migrated to Atlas page displays.
Migration Support
If your migration fails at any stage of the live migration process, Atlas notifies you via email with a link to explore the migration results.
If you have any questions regarding migration support beyond what is covered in this documentation, or if you encounter an error during migration, please request support through the Atlas UI.
To file a support ticket:
In Atlas, go to the Project Support page.
If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
If it's not already displayed, select your desired project from the Projects menu in the navigation bar.
Next to the Projects menu, expand the Options menu, then click Project Support.
The Project Support page displays.
Request support.
Click Request Support.
For Issue Category, select Help with live migration.
For Priority, select the appropriate priority. For questions, select Medium Priority. If there was a failure in migration, select High Priority.
For Request Summary, include Live Migration in your summary.
For More details, include any other relevant details about your question or migration error.
Click the Request Support button to submit the form.