
Export Cloud Backup Snapshot

On this page

  • Supported Storage Services
  • How Atlas Exports Snapshots
  • Exported Data Format
  • Limitations
  • Required Access
  • Prerequisites
  • Export Management

Note

This feature is not available for M0 (Free), M2, and M5 clusters. To learn more about which features are unavailable, see Atlas M0 (Free Cluster), M2, and M5 Limits.

Atlas lets you export your Cloud Backup snapshots to an object storage service.

To learn how to manage automated backup policies and schedules, see Manage Backup Policies.

Supported Storage Services

Atlas currently supports the following object storage services:

  • AWS S3 buckets

  • Azure Blob Storage

How Atlas Exports Snapshots

You can export individual snapshots manually, or set up an export policy to export your snapshots automatically. For automatic exports, you must specify a frequency in your export policy:

  • Daily

  • Weekly

  • Monthly

  • Yearly

Atlas automatically exports any backup snapshot with the frequency type that matches the export frequency. The exported result is a full backup of that snapshot.

Example

Consider the following:

  • A backup policy that sets a weekly and monthly snapshot schedule

  • An export policy that sets a monthly export frequency

Suppose that, at the end of the month, the weekly and monthly snapshots fall on the same day. That month has 4 snapshots: 3 weekly snapshots, plus a fourth that Atlas treats as a weekly snapshot but that is also the monthly snapshot because both occur on the same day. Atlas exports only the monthly snapshot, because only its frequency type matches the export frequency. To export the weekly snapshots as well, update the export policy to also export weekly snapshots. If the export frequency were set to weekly, Atlas would export all 4 snapshots.

As the export progresses, you may see partial results in your object storage service.

Atlas persists documents in snapshots regardless of their Time to Live (TTL) settings. You can access documents in a snapshot even after their TTL deadline has passed.

Atlas charges $0.125 per GB of data exported to the AWS S3 bucket or Azure Blob Storage container, in addition to the data transfer cost incurred from AWS or Azure itself. Atlas compresses the data before exporting. To estimate the amount of data being exported, add up the dataSize of each database in your cluster. This total corresponds to the uncompressed size of your export, and therefore to the maximum cost that Atlas charges for the export operation.
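
You can estimate the uncompressed export size ahead of time by summing dataSize across your databases. The following is a minimal sketch using mongosh; the connection string and username are placeholders for your own deployment.

# Sum dataSize (in bytes) across all databases to estimate the
# uncompressed export size. At $0.125 per GB, a 100 GB estimate
# corresponds to at most $12.50 in Atlas export charges.
mongosh "mongodb+srv://cluster0.example.mongodb.net" --username <user> --quiet --eval '
  let totalBytes = 0;
  db.adminCommand({ listDatabases: 1, nameOnly: true }).databases.forEach((d) => {
    totalBytes += db.getSiblingDB(d.name).stats().dataSize;
  });
  print((totalBytes / 1024 ** 3).toFixed(2) + " GB uncompressed");
'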

Atlas uploads an empty file to /exported_snapshots/.permissioncheck when you:

  • Create a new export bucket

  • Create an export job

After Atlas finishes exporting, Atlas uploads one metadata file named .complete for the export and one metadata file named metadata.json for each collection.

Atlas uploads the metadata file named .complete in the following path on your object store:

/exported_snapshots/<orgUUID>/<projectUUID>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/
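
For example, a resolved path might look like the following (the IDs, snapshot date, and timestamp below are illustrative placeholders only):

/exported_snapshots/60512d6f65e4047fe0842095/60512dac65e4047fe084220f/Cluster0/2020-04-03/1586065894/.complete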

Note

By default, Atlas uses organization and project UUIDs in the path for the metadata files. To use organization and project names instead of UUIDs, set the useOrgAndGroupNamesInExportPrefix flag to true via the API. In the path, Atlas replaces any spaces in the organization and project names with underscores (_) and removes any characters that might require special handling or that should be avoided in object keys.

The .complete metadata file is in JSON format and contains the following fields:

Field
Description
orgId
Unique 24-hexadecimal digit string that identifies the Atlas organization.
orgName
Name of the Atlas organization.
groupId
Unique 24-hexadecimal digit string that identifies the project in the Atlas organization.
groupName
Name of the Atlas project.
clusterUniqueId
Unique 24-hexadecimal digit string that identifies the Atlas cluster.
clusterName
Name of the Atlas cluster.
snapshotInitiationDate
Date when the snapshot was taken.
totalFiles
Total number of files uploaded to the object store.
labels
Labels of the cluster whose snapshot was exported.
customData
Custom data, if any, that you specified when creating the export job.

Example

{
  "orgId": "60512d6f65e4047fe0842095",
  "orgName": "org1",
  "groupId": "60512dac65e4047fe084220f",
  "groupName": "group1",
  "clusterUniqueId": "60512dac65e4047fe0842212",
  "clusterName": "cluster0",
  "snapshotInitiationDate": "2020-04-03T05:50:29.321Z",
  "totalFiles": 23,
  "labels": [
    {
      "key": "key1",
      "value": "xyz"
    },
    {
      "key": "key2",
      "value": "xyzuio"
    }
  ],
  "customData": [
    {
      "key": "key1",
      "value": "xyz"
    },
    {
      "key": "key2",
      "value": "xyzuio"
    }
  ]
}

Atlas uploads the metadata.json file for each collection in the following path on your object store:

/exported_snapshots/<orgUUID>/<projectUUID>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/<dbName>/<collectionName>/metadata.json

Note

By default, Atlas uses organization and project UUIDs in the path for the metadata files. To use organization and project names instead of UUIDs, set the useOrgAndGroupNamesInExportPrefix flag to true via the API. In the path, Atlas replaces any spaces in the organization and project names with underscores (_) and removes any characters that might require special handling or that should be avoided in object keys.

The metadata file is in JSON format and contains the following fields:

Field
Description
collectionName
Human-readable label that identifies the collection.
indexes
List of all the indexes on the collection, in the format returned by the db.collection.getIndexes() method.
options
Configuration options defined on the collection. To learn more about these options, see the db.createCollection() method.
type

(Optional) Type of collection. Atlas sets this field to timeseries for time series collections and view for views; the field is unset for standard collections.

Atlas doesn't support export of view type collections, although it writes this metadata file for views (as in the first example below).

uuid
Collection's UUID. To learn more about UUID, see UUID.

Example

{
  "options": {
    "viewOn": "othercol",
    "pipeline": [{ "$project": { "namez": "$name" } }]
  },
  "indexes": [],
  "collectionName": "viewcol",
  "type": "view"
}
{
  "options": {
    "timeseries": {
      "timeField": "timestamp",
      "granularity": "seconds",
      "bucketMaxSpanSeconds": { "$numberInt": "3600" }
    }
  },
  "indexes": [],
  "collectionName": "timeseriescol",
  "type": "timeseries"
}
{
  "indexes": [
    {
      "v": { "$numberInt": "2" },
      "key": {
        "_id": { "$numberInt": "1" }
      },
      "name": "_id_"
    }
  ],
  "uuid": "342c40a937c34c478bab03de8ce44f3e",
  "collectionName": "somecol"
}

If an export job fails:

  • Atlas doesn't automatically try to export again.

  • Atlas doesn't remove any partial data in your object store.

Exported Data Format

Atlas uploads gzip-compressed Extended JSON (v2) documents. Atlas doesn't upload these documents in any particular order. The following is the path to the data files on your object store (an illustrative resolved path follows the field list below):

/exported_snapshots/<orgName>/<projectName>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/<dbName>/<collectionName>/<shardName>.<increment>.json.gz

Where:

<orgName>
Name of your Atlas organization.
<projectName>
Name of your Atlas project.
<clusterName>
Name of your Atlas cluster.
<initiationDateOfSnapshot>
Date when the snapshot was taken.
<timestamp>
Timestamp when the export job was created.
<dbName>
Name of the database in the Atlas cluster.
<collectionName>
Name of the Atlas collection.
<shardName>
Name of the replica set. For sharded collections, this is the name of the primary shard.
<increment>
Count that is incremented as chunks are uploaded. Starts at 0.
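
For example, an export of collection users in database app from a cluster named Cluster0 might produce paths like the following (the organization, project, and shard names, the dates, and the timestamp are illustrative placeholders):

/exported_snapshots/org1/group1/Cluster0/2020-04-03/1586065894/app/users/atlas-abc123-shard-0.0.json.gz
/exported_snapshots/org1/group1/Cluster0/2020-04-03/1586065894/app/users/atlas-abc123-shard-0.1.json.gz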

Limitations

You can't perform the following actions:

Required Access

To manage your Cloud Backup snapshots, you must have Project Owner access to the project. Users with Organization Owner access must add themselves as a Project Owner to the project before they can manage Cloud Backup snapshots.

Prerequisites

To export your Cloud Backup snapshots, you need an M10 or higher Atlas cluster with Cloud Backup enabled. In addition, to export to an object store, you must complete the following steps for your cloud provider.

For AWS S3 buckets:

  1. Configure an AWS IAM role with STS:AssumeRole that grants Atlas access to your AWS resources. To learn more about configuring AWS access for Atlas, see Set Up Unified AWS Access.

  2. Configure an AWS IAM role policy that grants Atlas write access, or at minimum the s3:PutObject and s3:GetBucketLocation permissions, on your AWS resources. To learn more about configuring write access to AWS resources, see Set Up Unified AWS Access. A CLI sketch of both steps follows the policy example below.

    Example

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetBucketLocation",
          "Resource": "arn:aws:s3:::bucket-name"
        },
        {
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::bucket-name/*"
        }
      ]
    }
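
    The following is a minimal sketch of both steps using the AWS CLI. The role name, the policy file names, and the contents of trust-policy.json (the Atlas AWS account ARN and external ID from the unified access setup) are placeholders, not values Atlas prescribes.

    # Create the IAM role that Atlas assumes via STS:AssumeRole.
    # trust-policy.json must reference the Atlas AWS account ARN and
    # external ID generated during Set Up Unified AWS Access (placeholders).
    aws iam create-role \
      --role-name atlas-export-role \
      --assume-role-policy-document file://trust-policy.json

    # Attach the bucket policy shown above so Atlas can write to the bucket.
    aws iam put-role-policy \
      --role-name atlas-export-role \
      --policy-name atlas-export-bucket-access \
      --policy-document file://bucket-policy.json
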
For Azure Blob Storage containers:

  1. Set up an Azure Service Principal with an access policy for your Atlas project.

  2. Assign the Storage Blob Delegator and Storage Blob Data Contributor roles to your Azure Service Principal.

    To assign the roles to your Service Principal, you need the following information:

    Role
    Description
    Storage Blob Delegator

    This allows the Service Principal to sign SAS tokens to access the Azure Storage Container. To assign this role, run the following command:

    az role assignment create --assignee-object-id <service-principal-id> --role "Storage Blob Delegator" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>
    Storage Blob Data Contributor

    This allows read, write, and delete blob access for the Azure Storage Container. To assign this role, run the following command:

    az role assignment create --assignee-principal-type ServicePrincipal --assignee-object-id <service-principal-id> --role "Storage Blob Data Contributor" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name>

Export Management

You can create and manage snapshot exports to AWS S3 buckets using the Atlas CLI or the Atlas Administration API, and to Azure Blob Storage containers using the Atlas Administration API.

Note

You can't export snapshots to Azure Blob Storage Containers using the Atlas CLI.

You can manage export jobs using the Atlas CLI by creating or viewing export jobs.

To export one backup snapshot for an M10 or higher Atlas cluster to an existing AWS S3 Bucket using the Atlas CLI, run the following command:

atlas backups exports jobs create [options]

To watch for a specific backup export job to complete using the Atlas CLI, run the following command:

atlas backups exports jobs watch <exportJobId> [options]

To learn more about the syntax and parameters for the previous commands, see the Atlas CLI documentation for atlas backups exports jobs create and atlas backups exports jobs watch.
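
For example, the following commands create an export job and wait for it to complete. The cluster name, IDs, and flag names shown are illustrative assumptions; confirm them with atlas backups exports jobs create --help.

# Export a snapshot to a configured export bucket (IDs are placeholders).
atlas backups exports jobs create \
  --clusterName Cluster0 \
  --bucketId 62c569f85b7a381c093cc539 \
  --snapshotId 62c808ceeb4e021d850dfe1b

# Watch the export job until it completes (job ID comes from the create output).
atlas backups exports jobs watch 62c569f85b7a381c093cc540 --clusterName Cluster0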


To list the cloud backup export jobs for the project you specify using the Atlas CLI, run the following command:

atlas backups exports jobs list <clusterName> [options]

To return the details for the cloud backup export job you specify using the Atlas CLI, run the following command:

atlas backups exports jobs describe [options]

To learn more about the syntax and parameters for the previous commands, see the Atlas CLI documentation for atlas backups exports jobs list and atlas backups exports jobs describe.
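
For example, for a cluster named Cluster0 (the job ID and flag names are illustrative assumptions; confirm them with the --help output):

# List export jobs for a cluster, then inspect one by its ID (placeholders).
atlas backups exports jobs list Cluster0
atlas backups exports jobs describe --clusterName Cluster0 --exportId 62c569f85b7a381c093cc539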

You can manage export buckets using the Atlas CLI by creating, viewing, or deleting export buckets.

To create an export destination for Atlas backups from an existing AWS S3 bucket using the Atlas CLI, run the following command:

atlas backups exports buckets create <bucketName> [options]

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas backups exports buckets create.
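
For example, the following registers an existing S3 bucket as an export destination, authorizing it through a unified AWS access role. The bucket name, role ID, and flag names are illustrative assumptions; confirm them with atlas backups exports buckets create --help.

# Register an existing S3 bucket as an export destination (IDs are placeholders).
atlas backups exports buckets create my-export-bucket \
  --cloudProvider AWS \
  --iamRoleId 62c569f85b7a381c093cc539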

To list the cloud backup export buckets for the project you specify using the Atlas CLI, run the following command:

atlas backups exports buckets list [options]

To return the details for the cloud backup export bucket you specify using the Atlas CLI, run the following command:

atlas backups exports buckets describe [options]

To learn more about the syntax and parameters for the previous commands, see the Atlas CLI documentation for atlas backups exports buckets list and atlas backups exports buckets describe.

To delete an export destination for Atlas backups using the Atlas CLI, run the following command:

atlas backups exports buckets delete [options]

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas backups exports buckets delete.
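
For example, with a placeholder bucket ID (the --bucketId flag name is an illustrative assumption; confirm it with the --help output):

# Inspect a configured export bucket, then remove it as an export destination.
atlas backups exports buckets describe --bucketId 62c569f85b7a381c093cc539
atlas backups exports buckets delete --bucketId 62c569f85b7a381c093cc539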

To grant and manage cloud provider access and to create and manage snapshot export jobs, the API key that you use must have the Project Owner role.

Use the following Atlas Administration API endpoints to manage export buckets and containers.

To grant access to an AWS S3 bucket or Azure Blob Storage container for exporting snapshots, send a POST request to the Cloud Backups resource endpoint. This enables the AWS S3 bucket or Azure Blob Storage container to receive Atlas Cloud Backup snapshots. When sending the request to grant access, you must provide the following information (a hedged curl sketch follows this list):

  • For AWS S3 buckets, the unique 24-hexadecimal character string that identifies the unified AWS access role ID that Atlas must use to access the AWS S3 bucket. To learn more, see Set Up Unified AWS Access.
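
The following is a minimal sketch of granting access for an AWS S3 bucket with curl. The endpoint path, request fields, and IDs are assumptions based on the v1.0 Cloud Backups resource; check the Atlas Administration API reference for the authoritative request format.

# Grant Atlas access to an S3 bucket for snapshot exports (IDs are placeholders).
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/backup/exportBuckets" \
  --data '{
    "iamRoleId": "62c569f85b7a381c093cc539",
    "bucketName": "my-export-bucket",
    "cloudProvider": "AWS"
  }'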

To retrieve all the AWS S3 Buckets and Azure Blob Storage Containers to which Atlas exports snapshots, send a GET request to the Cloud Backups resource endpoint.

To delete an Export Bucket, you must first disable automatic export of snapshots to the AWS S3 Bucket or Azure Blob Storage Container for all clusters in the project and then send a DELETE request to the Cloud Backups resource endpoint with the ID of the Export Bucket. If necessary, send a GET request to the endpoint to retrieve the export Bucket ID.

Use the following Atlas Administration API endpoints to manage export jobs.

To export one Atlas backup snapshot to an AWS S3 Bucket or Azure Blob Storage Container, send a POST request to the Cloud Backups resource endpoint with the ID of the snapshot to export and the ID of the AWS S3 Bucket or Azure Blob Storage Container.
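
A minimal curl sketch of creating an export job follows; the endpoint path and request fields are assumptions based on the v1.0 Cloud Backups resource.

# Export one snapshot to a configured export bucket (IDs are placeholders).
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/exports" \
  --data '{
    "snapshotId": "62c808ceeb4e021d850dfe1b",
    "exportBucketId": "62c569f85b7a381c093cc539"
  }'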

To retrieve one snapshot export job by its ID, send a GET request to the Cloud Backups resource endpoint with the ID of the export job.

To retrieve all running snapshot export jobs, send a GET request to the Cloud Backups resource endpoint.
