Restore a Sharded Cluster from a Snapshot
When you restore a cluster from a snapshot, Cloud Manager provides you with restore files for the selected restore point.
To learn about the restore process, see Restore Overview.
Considerations
Review Change to BinData BSON Subtype
The BSON specification changed the default subtype for the BSON binary datatype (BinData) from 2 to 0. Some binary data stored in a snapshot may be BinData subtype 2. The backup process automatically detects and converts snapshot data in BinData subtype 2 to BinData subtype 0. If your application code expects BinData subtype 2, you must update your application code to work with BinData subtype 0.
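The subtype change is easier to see at the byte level. The following is a minimal sketch, using only the Python standard library, of how a single BSON binary element is laid out under each subtype (the field name and payload here are arbitrary examples); per the BSON specification, subtype 2 embeds a redundant int32 length prefix inside the payload, which the conversion to subtype 0 drops:

```python
import struct

def bson_binary_element(name: str, data: bytes, subtype: int) -> bytes:
    # Encode one BSON binary element (type byte 0x05).
    # Subtype 2 ("binary, old") wraps the payload in a redundant int32
    # length prefix; subtype 0 ("generic") stores the bytes directly.
    payload = struct.pack("<i", len(data)) + data if subtype == 2 else data
    return (
        b"\x05" + name.encode() + b"\x00"   # element type and field name
        + struct.pack("<i", len(payload))   # payload length
        + bytes([subtype])                  # subtype byte
        + payload
    )

old = bson_binary_element("blob", b"abc", 2)  # legacy subtype 2
new = bson_binary_element("blob", b"abc", 0)  # generic subtype 0
```

The only differences between the two encodings are the subtype byte and, for subtype 2, the extra four-byte inner length, which is why the conversion is safe for the stored bytes but can surprise application code that inspects the subtype.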
Restore using settings given in restoreInfo.txt
The backup restore file includes a metadata file named restoreInfo.txt. This file captures the options the database used when the snapshot was taken. The database must be run with the listed options after it has been restored. This file contains:
Group name
Replica Set name
Cluster ID (if applicable)
Snapshot timestamp (as a BSON Timestamp at UTC)
Restore timestamp (as a BSON Timestamp at UTC)
Last Oplog applied (as a BSON Timestamp at UTC)
MongoDB version
Storage engine type
mongod startup options used on the database when the snapshot was taken
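As an illustration only, a restoreInfo.txt file might look something like the following sketch; every name, ID, timestamp, and option shown here is a hypothetical placeholder, and the exact layout of the file may differ:

```
Restore Information
Group Name: my-project
Replica Set: shard_0
Cluster Id: 5f1e9c2e8b7a4d001f3c9a1b
Snapshot timestamp: Timestamp(1705276800, 1)
Restore timestamp: Timestamp(1705320000, 1)
Last Oplog Applied: Timestamp(1705319999, 4)
MongoDB Version: 4.2.25
Storage Engine: wiredTiger
mongod startup options: {"replication": {"replSetName": "shard_0"}, "sharding": {"clusterRole": "shardsvr"}, "storage": {"dbPath": "/data/shard_0"}}
```

Use the values in your own restoreInfo.txt, not these placeholders, when restarting the restored database.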
Snapshots when Agent Cannot Stop Balancer
Cloud Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Can't Stop Balancer.
Backup Considerations
Databases must fulfill the backup considerations appropriate to their feature compatibility version (FCV).
Encryption Considerations
Disable Client Requests to MongoDB during Restore
You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:
Restore to new systems with new hostnames and reconfigure your application code once the new deployment is running, or
Ensure that the MongoDB deployment will not receive client requests while you restore data.
Restore a Snapshot
To have Cloud Manager automatically restore the snapshot:
In MongoDB Cloud Manager, go to the Continuous Backup page for your project.
If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
If it's not already displayed, select your desired project from the Projects menu in the navigation bar.
Click Continuous Backup in the sidebar.
The Continuous Backup page displays.
Select the restore point.
Choose the point from which you want to restore your backup.
Restore Type: Snapshot
Description: Allows you to choose one stored snapshot.
Action: Select an existing snapshot to restore.

Restore Type: Point In Time
Description: Creates a custom snapshot that includes all operations up to, but not including, the selected time. By default, the Oplog Store stores 24 hours of data. For example, if you select 12:00, the last operation in the restore is 11:59:59 or earlier.
IMPORTANT: In FCV 4.0, you cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup. This note does not apply to FCV 4.2 or later.
Action: Select a Date and Time.

Restore Type: Oplog Timestamp
Description: Creates a custom snapshot that includes all operations up to and including the entered Oplog timestamp. The Oplog Timestamp contains two fields:
Timestamp: the number of seconds that have elapsed since the UNIX epoch.
Increment: the order of the operation applied within that second, as a 32-bit ordinal.
Action: Type an Oplog Timestamp and Increment. Run a query against local.oplog.rs on your replica set to find the desired timestamp.

Click Next.
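Because the Timestamp field is a plain count of seconds since the UNIX epoch, you can derive it from a wall-clock target with standard library tools. A small sketch, using a hypothetical restore target:

```python
from datetime import datetime, timezone

# Hypothetical restore target: the last operation at or before
# 2024-01-15 12:00:00 UTC.
target = datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc)

timestamp_field = int(target.timestamp())  # seconds since the UNIX epoch
increment_field = 1  # ordinal of the operation within that second
```

Compare the computed seconds value against the ts field of entries in local.oplog.rs to confirm you have the operation you expect.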
Choose to restore the files to another cluster.
Click Choose Cluster to Restore to.
Complete the following fields:
Project
Cluster to Restore to: Select a cluster to which you want to restore the snapshot.
Cloud Manager must manage the target sharded cluster.
WARNING: Automation removes all existing data from the cluster. It preserves all backup data and snapshots for the existing cluster.
Click Restore.
Cloud Manager notes how much storage space the restore requires in its UI.
Important
Rotate Master Key after Restoring Snapshots Encrypted with AES256-GCM
If you restore an encrypted snapshot that Cloud Manager encrypted with AES256-GCM, rotate your master key after completing the restore.
The manual restore process assumes that:
The target host has no data in place.
You have not used an encrypted snapshot.
You have not enabled two-factor authentication.
Warning
Restore the snapshot manually only if you can't run an automatic restore. If you determine that you must use a manual restore, contact MongoDB Support for help. This section provides a high-level overview of the stages in the manual restore procedure.
The manual restore process has the following high-level stages that you perform with help from MongoDB Support:
Connect to each replica set and the Config Server Replica Set (CSRS) with either the legacy mongo shell or mongosh.
(Optional) Review the configuration file of each replica set and the CSRS. After you complete the restore process, you can reconstruct the configuration on the restored replica sets using the saved configuration files.
Stop all mongod processes running on the target hosts.
Provision enough storage space to hold the restored data.
Prepare directories for data and logs.
Add a configuration file to your MongoDB Server directory with the target host's storage and log paths, and configuration for replicas and sharding roles.
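For example, a configuration file for one restored shard member might look like the following sketch; all paths, names, and the port are hypothetical placeholders that you should replace with the values recorded in restoreInfo.txt:

```yaml
# Hypothetical example for one restored shard member.
storage:
  dbPath: /data/shard0            # prepared data directory
systemLog:
  destination: file
  path: /var/log/mongodb/shard0.log
  logAppend: true
replication:
  replSetName: shard0             # must match the original replica set name
sharding:
  clusterRole: shardsvr           # use configsvr for CSRS members
net:
  port: 27018
```

Each replica set member and each CSRS member gets its own file with its own paths and port.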
The full manual restore procedure can be found in the MongoDB Server 4.2 documentation. For MongoDB 4.4 or later deployments, refer to the corresponding versions of the manual.
To have Cloud Manager automatically restore the snapshot:
In MongoDB Cloud Manager, go to the Continuous Backup page for your project.
If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
If it's not already displayed, select your desired project from the Projects menu in the navigation bar.
Click Continuous Backup in the sidebar.
The Continuous Backup page displays.
Select the restore point.
Choose the point from which you want to restore your backup.
Restore Type: Snapshot
Description: Allows you to choose one stored snapshot.
Action: Select an existing snapshot to restore.

Restore Type: Point In Time
Description: Allows you to choose a date and time as your restore time objective for your snapshot. By default, the Oplog Store stores 24 hours of data. For example, if you select 12:00, the last operation in the restore is 11:59:59 or earlier.
IMPORTANT: If you are restoring a sharded cluster that runs FCV 4.0 or earlier, you must enable cluster checkpoints to perform a PIT restore on a sharded cluster. If no checkpoints that include your date and time are available, Cloud Manager asks you to choose another point in time.
IMPORTANT: You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.
Action: Select a Date and Time.

Click Next.
If you are restoring a sharded cluster that runs FCV 4.0 or earlier and you chose Point In Time, a list of checkpoints closest to the time you selected appears.
To start your point in time restore, you may:
Choose one of the listed checkpoints, or
Click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.
Choose to restore the files to another cluster.
Click Choose Cluster to Restore to.
Complete the following fields:
Project
Cluster to Restore to: Select a cluster to which you want to restore the snapshot.
Cloud Manager must manage the target sharded cluster.
WARNING: Automation removes all existing data from the cluster. It preserves all backup data and snapshots for the existing cluster.
Click Restore.
Cloud Manager notes how much storage space the restore requires in its console.
Important
Rotate Master Key after Restoring Snapshots Encrypted with AES256-GCM
If you restore an encrypted snapshot that Cloud Manager encrypted with AES256-GCM, rotate your master key after completing the restore.
The manual restore process assumes that:
The target host has no data in place.
You have not used an encrypted snapshot.
You have not enabled two-factor authentication.
Warning
Restore the snapshot manually only if you can't run an automatic restore. If you determine that you must use a manual restore, contact MongoDB Support for help. This section provides a high-level overview of the stages in the manual restore procedure.
The manual restore process has the following high-level stages that you perform with help from MongoDB Support:
Connect to each replica set and the Config Server Replica Set (CSRS) with either the legacy mongo shell or mongosh.
(Optional) Review the configuration file of each replica set and the CSRS. After you complete the restore process, you can reconstruct the configuration on the restored replica sets using the saved configuration files.
Stop all mongod processes running on the target hosts.
Provision enough storage space to hold the restored data.
Prepare directories for data and logs.
Add a configuration file to your MongoDB Server directory with the target host's storage and log paths, and configuration for replicas and sharding roles.
The full manual restore procedure can be found in the MongoDB Server documentation.