Building a Multi-Environment Continuous Delivery Pipeline for MongoDB Atlas

Johannes Brännström, Pierre Petersson • 8 min read • Published Feb 10, 2022 • Updated Jan 23, 2024
AWS • Docker • Atlas

Why CI/CD?

To increase the speed and quality of development, you may use continuous delivery strategies to manage and deploy your application code changes. However, continuous delivery for databases is often a manual process.
Adopting continuous integration and continuous delivery (CI/CD) for managing the lifecycle of a database has the following benefits:
  • An automated multi-environment setup enables you to move faster and focus on what really matters.
  • Confidence in the changes being applied increases.
  • The process is easier to reproduce.
  • All changes to database configuration will be traceable.

Why CI/CD for MongoDB Atlas?

MongoDB Atlas is a multi-cloud developer data platform, providing an integrated suite of cloud database and data services to accelerate and simplify how you build with data. MongoDB Atlas also provides a comprehensive API, making CI/CD for the actual data platform itself possible.
In this blog, we’ll demonstrate how to set up CI/CD for MongoDB Atlas, in a typical production setting. The intended audience is developers, solutions architects, and database administrators with knowledge of MongoDB Atlas, AWS, and Terraform.

Our CI/CD Solution Requirements

  • Ensure that each environment (dev, test, prod) is isolated, both to limit the blast radius of human error and for security reasons. MongoDB Atlas Projects and API Keys will be utilized to enforce environment isolation.
  • Use managed services exclusively, to minimize the time spent managing infrastructure.
  • Minimize the commercial agreements required. Use as much as possible from AWS and the Atlas ecosystem so that there is no need to purchase external tooling, such as HashiCorp Vault.
  • Minimize time spent installing local dev tooling, such as git and Terraform. The solution provides a docker image with all the tooling required to provision the Terraform templates. The same image is also used to run the pipeline in AWS CodeBuild. (A sketch of such an image follows this list.)
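As an illustration only, a minimal tooling image could be built along these lines. This is a hedged sketch, not the published piepet/cicd-mongodb image: the base image, package names, pinned versions, and download URLs are assumptions.

# Hypothetical Dockerfile sketch; base image, versions, and URLs are assumptions
FROM amazonlinux:2

# git for CodeCommit, unzip for the Terraform archive, AWS CLI for pipeline interaction
RUN yum install -y git unzip awscli curl tar gzip

# Terraform, pinned to a version the templates are assumed to be tested with
RUN curl -sLo terraform.zip https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip \
 && unzip terraform.zip -d /usr/local/bin \
 && rm terraform.zip

# mongosh for connectivity checks against the Atlas clusters
RUN curl -sLo mongosh.tgz https://downloads.mongodb.com/compass/mongosh-1.1.9-linux-x64.tgz \
 && tar -xzf mongosh.tgz --strip-components=1 -C /usr/local \
 && rm mongosh.tgz

WORKDIR /terraform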

Implementation

Enough talk—let’s get to the action. As developers, we love working examples as a way to understand how things work. So, here’s how we did it.

Prerequisites

First off, we need an Atlas account to provision against, and somewhere to run our automation. You can get an Atlas account for free at mongodb.com; if you want to take this demo for a spin, create your Atlas account now. Next, you'll need to create an organization-level API key. If you or your org already have an Atlas account you'd like to use, the organization owner will need to create the organization-level API key.
Second, you'll need an AWS account. For more information on how to create one, see How do I create an AWS account? This demo uses some paid services, such as S3, but new AWS accounts include 12 months of free-tier usage.
You will also need Docker installed, as we use a docker container to run all provisioning. For more information on how to install Docker, see Get Started with Docker. We use Docker because it makes it easier for you to get started: all the tooling, such as the AWS CLI, mongosh, and Terraform, is packaged in the container.
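Before moving on, you can verify your Docker installation and, optionally, pre-pull the tooling image used later in this guide:

# Check that Docker is installed and the daemon is reachable
docker --version
docker info

# Optional: pre-pull the tooling image used in Step 1
docker pull piepet/cicd-mongodb:46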

What You Will Build

  • MongoDB Atlas Projects for the dev, test, and prod environments, isolating each environment to limit the blast radius of human error and for security reasons.
  • A MongoDB Atlas Cluster in each Atlas project (dev, test, prod). MongoDB Atlas is a fully managed data platform for modern applications. Storing data as documents, the way it is accessed, makes developers more productive. It provides a document-based database that is cost-efficient and resizable, while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups, so you can focus on your applications while it provides the foundation of high performance, high availability, security, and compatibility they need.
  • CodePipeline orchestrates the CI/CD deployment stages.
  • IAM roles and policies allow cross-account access to applicable AWS resources.
  • CodeCommit provides a repo to store the Terraform templates that are applied when the pipeline runs.
  • Amazon S3 creates a bucket to store pipeline artifacts.
  • CodeBuild projects run Terraform to apply the database infrastructure changes.
  • VPC security groups ensure the secure flow of traffic between a CodeBuild project deployed within a VPC and MongoDB Atlas. AWS Private Link will also be provisioned.
  • AWS Parameter Store stores secrets securely and centrally, such as the Atlas API keys and the database username and password (see the sketch after this list).
  • Amazon SNS notifies you by email when a developer pushes changes to the CodeCommit repo.
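To make the Parameter Store piece concrete, here is a minimal Terraform sketch of storing an Atlas project API key as a SecureString. The parameter name and variable are illustrative, not taken from the solution's actual templates:

# Illustrative only: parameter name and variable are hypothetical
resource "aws_ssm_parameter" "atlas_private_key" {
  name  = "/cicd/dev/atlas_private_key"
  type  = "SecureString"
  value = var.atlas_project_private_key
}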

Step 1: Bootstrap AWS Resources

Next, we’ll fire off the script to bootstrap our AWS environment and Atlas account as shown in Diagram 1 using Terraform.
You will need to use programmatic access keys for your AWS account and the Atlas organization-level API key that you created as described in the prerequisites. This is also the only time you'll need to handle the keys manually.
# Set your environment variables

# You'll find this in your Atlas console as described in prerequisites
export ATLAS_ORG_ID=60388113131271beaed5

# The public part of the Atlas Org key you created previously
export ATLAS_ORG_PUBLIC_KEY=l3drHtms

# The private part of the Atlas Org key you created previously
export ATLAS_ORG_PRIVATE_KEY=ab02313b-e4f1-23ad-89c9-4b6cbfa1ed4d

# Pick a username, the script will create this database user in Atlas
export DB_USER_NAME=demouser

# Pick a project base name, the script will append -dev, -test, -prod depending on environment
export ATLAS_PROJECT_NAME=blogcicd6

# The AWS region you want to deploy into
export AWS_DEFAULT_REGION=eu-west-1

# The AWS public programmatic access key
export AWS_ACCESS_KEY_ID=AKIAZDDBLALOZWA3WWQ

# The AWS private programmatic access key
export AWS_SECRET_ACCESS_KEY=nmarrRZAIsAAsCwx5DtNrzIgThBA1t5fEfw4uJA
Once all the parameters are defined, you are ready to run the script that will create your CI/CD pipeline.
# Clone solution code repository
$ git clone https://github.com/mongodb-developer/atlas-cicd-aws
$ cd atlas-cicd

# Start the docker container, which contains all the tooling, e.g., Terraform, mongosh, and the AWS CLI
$ docker container run -it --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION -e ATLAS_ORG_ID -e ATLAS_ORG_PUBLIC_KEY -e ATLAS_ORG_PRIVATE_KEY -e DB_USER_NAME -e ATLAS_PROJECT_NAME -v ${PWD}/terraform:/terraform piepet/cicd-mongodb:46

$ cd terraform

# Bootstrap AWS account and Atlas Account
$ ./deploy_baseline.sh $AWS_DEFAULT_REGION $ATLAS_ORG_ID $ATLAS_ORG_PUBLIC_KEY $ATLAS_ORG_PRIVATE_KEY $DB_USER_NAME $ATLAS_PROJECT_NAME base apply
When deploy_baseline.sh is invoked, provisioning of AWS resources starts, using Terraform templates. The resources created are shown in Diagram 1.
From here on, you'll be able to operate your Atlas infrastructure without using your local docker instance. If you want to blaze through this guide, including cleaning everything up, you might as well keep the container running, though: the final step of tearing down the AWS infrastructure has to run from somewhere outside the pipeline, such as your local docker instance.
Until you commit something, the pipeline will show a failed Source stage, because it tries to check out a branch that does not exist yet in the code repository. After you've committed the Terraform code you want to execute, the Source stage will restart and proceed as expected. You can find the pipeline in the AWS console at this URL: https://eu-west-1.console.aws.amazon.com/codesuite/codepipeline/pipelines?region=eu-west-1
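If you prefer the CLI over the console, you can inspect the pipeline from the container as well. The pipeline name below is a placeholder; list the pipelines first to find the one the bootstrap created:

# List pipelines, then inspect stage-by-stage status (replace the placeholder name)
aws codepipeline list-pipelines
aws codepipeline get-pipeline-state --name <your-pipeline-name>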

Step 2: Deploy Atlas Cluster

Next, we deploy the Atlas clusters (projects, users, API keys, etc.). This is done by pushing a configuration to the new AWS CodeCommit repo.
If you're like me and want to see how provisioning of the Atlas cluster works before setting up IAM properly, you can, as a bit of a hack, push the original GitHub repo to AWS CodeCommit directly from inside the docker container (inside the terraform folder). Pushing to the CodeCommit repo triggers AWS CodePipeline, which starts provisioning the Atlas cluster.
cd /terraform
# Push default settings to AWS CodeCommit
./git_push_terraform.sh
To set up access to the CodeCommit repo properly, in a way that survives stopping the docker container, you'll need a proper CodeCommit git user. Follow the steps in the AWS documentation to create and configure your CodeCommit git user in AWS IAM (the snippet below shows what that setup typically looks like). Then clone the AWS CodeCommit repository that was created during bootstrapping, outside your docker container, perhaps in another tab of your shell, using your IAM credentials.
If you did not use the “hack” to initialize the repo, it will be empty, so copy the terraform folder provided in this solution to the root of the cloned CodeCommit repository, then commit and push to kick off the pipeline. From now on, you can use this repo to control your setup! You should see in the AWS CodePipeline console that the pipeline has been triggered. The pipeline will create Atlas clusters in each of the Atlas projects and configure AWS PrivateLink.
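For reference, authenticating git against CodeCommit with IAM credentials uses the AWS CLI as a credential helper; the region and repository name below are illustrative:

# Use the AWS CLI as git credential helper for CodeCommit
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone the repo created during bootstrapping (region and repo name are illustrative)
git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/<your-repo-name>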
Let’s dive into the stages defined in this Terraform pipeline file.
Deploy-Base
This stage is essentially a re-application of what we did in the bootstrapping, which ensures we can improve on the AWS pipeline infrastructure itself over time. It creates the projects in Atlas, including Atlas project API keys, Atlas project users, and database users.
Deploy-Dev
This stage creates the corresponding Private Link and MongoDB cluster.
Deploy-Test
This stage creates the corresponding Private Link and MongoDB cluster.
Deploy-Prod
This stage creates the corresponding Private Link and MongoDB cluster.
Gate
Approving means we think it all looks good. Perhaps counterintuitively, but great for demos, approval proceeds to teardown. This might be one of the first behaviors you'll change. :)
Teardown
This decommissions the dev, test, and prod resources we created above. To decommission the base resources, including the pipeline itself, we recommend you run that externally—for example, from the Docker container on your laptop. We’ll cover that later.
As you advance towards the Gate stage, you’ll see the Atlas clusters build out. Below is an example where the Test stage is creating a cluster. Approving the Gate will undeploy the resources created in the dev, test, and prod stages, but keep projects and users.

Step 3: Make a Change!

Assuming you took the time to set up IAM properly, you can now work with the infrastructure as code directly from your laptop, outside the container. If you just deployed using the hack inside the container, you can keep interacting through the repo created inside the docker container, but beware: at some point the container will stop, and that repo will be gone.
Navigate to the root of your clone of the CodeCommit repo. For example, if you used the script in the container, you'd run (also in the container):
cd /${ATLAS_PROJECT_NAME}-base-repo/
Then you can edit, for example, the MongoDB version by changing 4.4 to 5.0 in terraform/environment/dev/variables.tf.
variable "cluster_mongodbversion" {
  description = "The Major MongoDB Version"
  default     = "5.0"
}
Then push (git add, commit, push) and you'll see a new run initiated in CodePipeline. For example:
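# Stage, commit, and push the version change to trigger the pipeline
# (the commit message is just an example)
git add terraform/environment/dev/variables.tf
git commit -m "Upgrade dev cluster to MongoDB 5.0"
git push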

Step 4: Clean Up Base Infrastructure

Now, that was interesting. Time to clean up! To decommission the full environment, first approve the Gate stage to execute the Teardown job. Once that has run, only the base infrastructure remains. Start the container again as in Step 1 if it's not running, and then execute deploy_baseline.sh, replacing the word apply with destroy:
# inside the /terraform folder of the container

# Clean up AWS and Atlas Account
./deploy_baseline.sh $AWS_DEFAULT_REGION $ATLAS_ORG_ID $ATLAS_ORG_PUBLIC_KEY $ATLAS_ORG_PRIVATE_KEY $DB_USER_NAME $ATLAS_PROJECT_NAME base destroy

Lessons Learned

In this solution, we have separated the creation of AWS resources and the Atlas cluster, as the changes to the Atlas cluster will be more frequent than the changes to the AWS resources.
When implementing infrastructure as code for a MongoDB Atlas cluster, you have to consider not just cluster creation but also a strategy for separating the dev, qa, and prod environments, and for storing secrets, in order to minimize the blast radius.
We also noticed how useful resource tagging is for making Terraform scripts portable: by setting tags on AWS resources, a script does not need to know resource names up front but can look them up by tag instead. For example:
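Here is a minimal sketch of that pattern; the tag key and value are assumptions for illustration:

# Look up a VPC by tag instead of a hard-coded ID (tag values are illustrative)
data "aws_vpc" "selected" {
  tags = {
    Environment = "dev"
  }
}

# Elsewhere, reference it as data.aws_vpc.selected.id, e.g.:
# vpc_id = data.aws_vpc.selected.id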

Conclusion

By using CI/CD automation for Atlas clusters, you can speed up deployments and increase the agility of your software teams.
MongoDB Atlas offers a powerful API that, in combination with AWS CI/CD services and Terraform, can support continuous delivery of MongoDB Atlas clusters, and version-control the database lifecycle. You can apply the same pattern with other CI/CD tools that aren’t specific to AWS.
In this blog, we've presented a thorough, reproducible, and reusable deployment process for MongoDB Atlas, with traceability built in. A DevOps team can use our demonstration as inspiration for how to quickly deploy MongoDB Atlas while automatically embedding organizational best practices.
