
Deploy MongoDB Atlas with Terraform Modules

This guide walks you through deploying an enterprise-ready MongoDB Atlas environment using the official Terraform MongoDB Atlas Modules, which facilitate going from zero to a full Atlas deployment using Terraform.

Each module is a reusable building block that provisions Atlas resources alongside dependencies required for secure and private connectivity with cloud providers.

This guide provides examples to deploy the following resources in AWS, Azure, or Google Cloud:

  • An Atlas project and sharded cluster.

  • Cloud provider networking with PrivateLink connectivity.

  • Backup export to cloud storage.

  • An optional validation virtual machine to confirm end-to-end connectivity. (Not currently available in Google Cloud; a validation VM will be added in a future update.)

Note

The examples in this guide create all required cloud provider resources by default. If you need to use pre-existing resources instead, see the Bring Your Own Resources section that corresponds to your cloud provider.

Before starting this tutorial, ensure you have the following tools:

  • Terraform (v1.9 or later)

  • mongosh (required only for the validation steps; the validation VM has mongosh installed by default)

This guide uses an Atlas Service Account for authentication. Service Accounts are the recommended authentication method for programmatic access.

  1. Sign in to or create your MongoDB Atlas account.

  2. Set your Service Account credentials as environment variables:

    export MONGODB_ATLAS_CLIENT_ID="<your-client-id>"
    export MONGODB_ATLAS_CLIENT_SECRET="<your-client-secret>"

    Note

    For more information on setting up Service Accounts, see Grant Programmatic Access to an Organization.
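
    With the credentials exported as above, the Atlas provider needs no secrets in code. A minimal sketch of the provider configuration (the version constraint is illustrative; pin to whatever version your modules require):

    terraform {
      required_providers {
        mongodbatlas = {
          source = "mongodb/mongodbatlas"
          # Illustrative constraint; use the version your modules require.
          version = ">= 1.0"
        }
      }
    }

    # No credentials in code: the provider reads MONGODB_ATLAS_CLIENT_ID
    # and MONGODB_ATLAS_CLIENT_SECRET from the environment.
    provider "mongodbatlas" {}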

Configure your AWS credentials using one of the following methods:

  • Environment variables:

    export AWS_ACCESS_KEY_ID="<your-access-key-id>"
    export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
  • AWS CLI profile: Run aws configure to store credentials for the AWS CLI.

  • IAM role: When running in AWS, attach an IAM role to your instance or task with the appropriate permissions.

    Note

    For more information, see AWS IAM Authentication.

Configure your Azure credentials using one of the following methods:

  • Azure CLI:

    az login
  • Service Principal environment variables:

    export ARM_CLIENT_ID="<your-client-id>"
    export ARM_CLIENT_SECRET="<your-client-secret>"
    export ARM_SUBSCRIPTION_ID="<your-subscription-id>"
    export ARM_TENANT_ID="<your-tenant-id>"

Configure your Google Cloud credentials using one of the following methods:

  • Application Default Credentials:

    gcloud auth application-default login
  • Service account key file:

    export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
  • Service account impersonation: Set the service_account_email variable in terraform.tfvars to the email of the service account to impersonate.

Note

For more information, see Manage GCP Access.

Step 1: Clone the example repository

Download the complete example for your cloud provider from the Atlas Examples repository and navigate to the example directory.

AWS:

git clone https://github.com/terraform-mongodbatlas-modules/atlas-examples.git
cd atlas-examples/aws/atlas-aws-module-complete

Azure:

git clone https://github.com/terraform-mongodbatlas-modules/atlas-examples.git
cd atlas-examples/azure/atlas-azure-module-complete

Google Cloud:

git clone https://github.com/terraform-mongodbatlas-modules/atlas-examples.git
cd atlas-examples/gcp/atlas-gcp-module-complete
Step 2: Configure variables

Copy the example terraform.tfvars file and fill in your values:

cp terraform.tfvars.example terraform.tfvars

The following table describes the required variables. For a full list of available variables, see the variables.tf file in the example directory that corresponds to your cloud provider.

Common Variables

  • atlas_org_id: Your MongoDB Atlas Organization ID. To find it, go to Organization > Settings in the Atlas UI.

  • atlas_project_name: Name for the new Atlas project.

  • atlas_cluster_name: Name for the Atlas cluster.

  • regions: List of regions where the cluster and PrivateLink endpoints are deployed. See the cloud-provider-specific details below.

Cloud-Provider-Specific Variables

AWS

  • aws_region: The primary AWS region for provider operations (for example, us-east-1).

  • regions[].name: The Atlas region name for the cluster shard (for example, US_EAST_1). For all supported values, see Cloud Providers and Regions.

  • regions[].vpc_id: The ID of an existing VPC in this region. DNS hostnames and DNS resolution must be enabled on the VPC.

  • regions[].subnet_ids: List of at least two private subnet IDs in different Availability Zones within the VPC, used for PrivateLink endpoint placement.
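
The regions[] entries form a list of objects. A hypothetical declaration of this input (the authoritative definition lives in the example's variables.tf) might look like:

variable "regions" {
  type = list(object({
    name       = string       # Atlas region name, e.g. US_EAST_1
    vpc_id     = string       # existing VPC; DNS hostnames/resolution enabled
    subnet_ids = list(string) # two or more private subnets in distinct AZs
  }))
}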

The following example shows a minimal terraform.tfvars for AWS:

atlas_org_id       = "<YOUR_ATLAS_ORG_ID>"
atlas_project_name = "my-atlas-project"
atlas_cluster_name = "my-atlas-cluster"
aws_region         = "us-east-1"
regions = [
  {
    name       = "US_EAST_1"
    vpc_id     = "<YOUR_VPC_ID>"
    subnet_ids = ["<YOUR_SUBNET_ID_1>", "<YOUR_SUBNET_ID_2>"]
  }
]
Azure

  • azure_resource_group_name: Azure Resource Group where Private Endpoints, backup storage, and the optional validation VM are created. The Resource Group must already exist.

  • azure_subscription_id: Your Azure Subscription ID. If omitted, the module uses the default subscription from your Azure credentials.

  • regions[].name: The Atlas region name for the cluster shard (for example, US_EAST_2). For all supported values, see Cloud Providers and Regions.

  • regions[].azure_location: The Azure location for this region (for example, eastus2). Only required for the first region entry.

  • regions[].subnet_id: The Azure subnet ID where Private Endpoints and the validation VM are created.
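
For Azure, the regions[] objects carry Azure-specific fields. A hypothetical declaration (the authoritative definition is in the example's variables.tf) might look like:

variable "regions" {
  type = list(object({
    name           = string           # Atlas region name, e.g. US_EAST_2
    azure_location = optional(string) # e.g. eastus2; required on the first entry
    subnet_id      = string           # subnet for Private Endpoints and the VM
  }))
}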

The following example shows a minimal terraform.tfvars for Azure:

atlas_org_id              = "<YOUR_ATLAS_ORG_ID>"
atlas_project_name        = "my-atlas-project"
atlas_cluster_name        = "my-atlas-cluster"
azure_resource_group_name = "<YOUR_RESOURCE_GROUP>"
regions = [
  {
    name           = "US_EAST_2"
    azure_location = "eastus2"
    subnet_id      = "<YOUR_SUBNET_ID>"
  }
]
Google Cloud

  • gcp_project_id: Your Google Cloud project ID.

  • regions[].name: The region name in Atlas format (for example, US_EAST_4) or Google Cloud format (for example, us-east4). The module normalizes both formats internally. For all supported values, see Cloud Providers and Regions.

  • regions[].subnetwork: The subnetwork self_link where PSC forwarding rules are created (for example, https://www.googleapis.com/compute/v1/projects/<PROJECT>/regions/<REGION>/subnetworks/<NAME>). The VPC network is derived from the subnetwork automatically.
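
For Google Cloud, the regions[] objects are simpler. A hypothetical declaration (the authoritative definition is in the example's variables.tf) might look like:

variable "regions" {
  type = list(object({
    name       = string # US_EAST_4 or us-east4; both formats are normalized
    subnetwork = string # subnetwork self_link for PSC forwarding rules
  }))
}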

The following example shows a minimal terraform.tfvars for Google Cloud:

atlas_org_id       = "<YOUR_ATLAS_ORG_ID>"
atlas_project_name = "my-atlas-project"
atlas_cluster_name = "my-atlas-cluster"
gcp_project_id     = "<YOUR_GCP_PROJECT_ID>"
regions = [
  {
    name       = "US_EAST_4"
    subnetwork = "<YOUR_SUBNETWORK_SELF_LINK>"
  }
]
Step 3: Deploy with Terraform

Run the following Terraform commands to initialize the working directory and deploy the infrastructure:

terraform init
terraform plan -var-file terraform.tfvars
terraform apply -var-file terraform.tfvars

terraform init downloads the required providers and modules. terraform plan previews the resources that will be created. Review the plan carefully before applying.

After terraform apply completes, the following resources are provisioned in your account:

AWS

  • MongoDB Atlas: Organization access, a new project, and a multi-region sharded cluster.

  • AWS IAM Role: A role that Atlas assumes to interact with your AWS account for Cloud Provider Access.

  • AWS VPC Endpoints: One VPC Endpoint per region, connected to the Atlas PrivateLink service for private, secure connectivity.

  • AWS S3 Bucket: A bucket for Atlas backup exports.

  • Validation VM (optional, enabled by default): An EC2 instance in the first region's subnet to verify Atlas connectivity over PrivateLink.

Azure

  • MongoDB Atlas: Organization access, a new project, and a multi-region sharded cluster.

  • Azure Service Principal: An Azure AD Service Principal that Atlas uses to interact with your Azure subscription.

  • Azure Private Endpoints: One Private Endpoint per region, connected to the Atlas PrivateLink service for private, secure connectivity.

  • Azure Storage Account: A storage account and blob container for Atlas backup exports.

  • Validation VM (optional, enabled by default): A Linux VM in the first region's subnet to verify Atlas connectivity over PrivateLink.

Google Cloud

  • MongoDB Atlas: Organization access, a new project, and a multi-region sharded cluster.

  • GCP Access: An Atlas service account authorized to interact with your Google Cloud project.

  • PSC Forwarding Rules: One Google Cloud forwarding rule and compute address per region, connected to the Atlas PrivateLink service via Private Service Connect (PSC) for private, secure connectivity.

  • GCP Storage Bucket: A Google Cloud Storage bucket for Atlas backup exports.

Step 4: Validate the deployment

You can validate your deployment either with a virtual machine (VM) that the example creates for you, or by connecting directly with mongosh using the PrivateLink connection string.

Connect and Test with a Virtual Machine

If you deployed the validation VM (enabled by default), you can use it to verify Atlas connectivity from within your private network. The VM has mongosh pre-installed and the connection string pre-configured.

AWS

  1. Note the validation_vm output for the instance ID and access commands:

    terraform output validation_vm
  2. Connect via AWS Systems Manager (SSM) Session Manager (default, no SSH required):

    aws ssm start-session --target <instance-id>

    Alternatively, if you set validation_vm_create_ec2_instance_connect_endpoint = true, connect via EC2 Instance Connect:

    aws ec2-instance-connect ssh --instance-id <instance-id> --os-user ubuntu
  3. Run the validation script on the VM to test connectivity:

    ./validate-atlas

    The script confirms that mongosh can connect and runs CRUD operations against the cluster.

Azure

  1. Note the validation_vm output for the VM name and username:

    terraform output validation_vm
  2. Retrieve the VM password:

    terraform output -raw validation_vm_password
  3. Connect via the Azure Serial Console in the Azure portal, or via Azure Bastion if you provided an SSH key with validation_vm_ssh_key.

  4. Run the validation script on the VM to test connectivity:

    ./validate-atlas

    The script confirms that mongosh can connect and runs CRUD operations against the cluster.

Google Cloud

The Google Cloud example does not currently include a validation VM. Use the mongosh connection method below to verify your deployment; you must run mongosh from a host in one of your Google Cloud subnetworks for this method to succeed.

Connect and Test with mongosh

You can also test connectivity from any host with access to your private network.

  1. Retrieve the connection string.

    After the deployment completes, retrieve the PrivateLink connection string from the Terraform outputs:

    terraform output connection_string

    The connection string uses the private endpoint SRV format and routes traffic through your PrivateLink connection.

  2. Run the following command, replacing <connection-string> with the value from the terraform output connection_string command:

    mongosh "<connection-string>"
  3. After connecting, run the following commands to write and retrieve a test document:

    db.test.insertOne({ msg: "Hello Atlas" })
    db.test.findOne()

A successful response confirms that your cluster is reachable and accepting read and write operations.
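
For reference, a PrivateLink SRV connection string generally resembles the following; the exact hostname is generated per deployment, so always use the value from the Terraform output rather than constructing it by hand:

mongodb+srv://<cluster-name>-pl-0.<unique-id>.mongodb.net/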

Step 5: Clean up

To tear down all resources provisioned by this deployment and avoid unwanted charges, run:

terraform destroy -var-file terraform.tfvars

Warning

terraform destroy permanently deletes all resources managed by this configuration, including the Atlas cluster and its data. Back up any data you want to keep before running this command.
