MongoDB Atlas with Terraform

Samuel Molling • 9 min read • Published Jan 23, 2024 • Updated Jan 23, 2024
In this tutorial, I will show you how to start using MongoDB Atlas with Terraform and create some simple resources. This first part is introductory; in the next article, I will explore more complex items and show how to connect the creation of several resources into a single module. The tutorial is aimed at people who want to maintain their infrastructure as code (IaC) in a standardized and simple way. If you already use or want to use IaC on the MongoDB Atlas platform, this article is for you.
What are modules?
Modules are code containers for multiple resources that are used together. They serve several important purposes in building and managing infrastructure as code, such as the following (a minimal example of calling a module appears after this list):
  1. Code reuse.
  2. Organization.
  3. Encapsulation.
  4. Version management.
  5. Ease of maintenance and scalability.
  6. Sharing in the community.
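For illustration, here is a minimal sketch of how a module is consumed once it exists. The module path and inputs below are hypothetical; we will build a real module in the next article.

module "atlas_project" {
  source = "./modules/project" # hypothetical local module directory

  # inputs exposed by the module (names are illustrative)
  name  = "project-test"
  teams = []
}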
Everything we do here is covered in the provider's resource documentation.
Note: We will not use a backend file. However, for production implementations, it is extremely important and safer to store the state file in a remote backend such as Amazon S3, Google Cloud Storage (GCS), or Azure Blob Storage (the azurerm backend). A minimal sketch follows.
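As a reference, a remote backend configuration is only a few lines. The sketch below assumes an S3 bucket; the bucket name, key, and region are placeholders you would replace with your own.

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"                # placeholder bucket name
    key    = "mongodb-atlas/project/terraform.tfstate"  # placeholder state path
    region = "us-east-1"                                # placeholder region
  }
}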

Creating a project

In this first step, we will dive into the process of creating a project using Terraform. Terraform is a powerful infrastructure-as-code tool that allows you to manage and provision IT resources in an efficient and predictable way. By using it in conjunction with MongoDB Atlas, you can automate the creation and management of database resources in the cloud, ensuring a consistent and reliable infrastructure.
To get started, you'll need to install Terraform in your development environment. This step is crucial, as it is the basis for running all the scripts and infrastructure definitions we will create. After installation, the next step is to configure Terraform to work with MongoDB Atlas. At this point, you will need an API key with permission to create a project.
To create an API key, you must:
  1. Select Access Manager at the top of the page, and click Organization Access.
  2. Click Create API Key.
  3. Enter a brief description of the API key and select the necessary permission. In this case, I set it to Organization Owner. After that, click Next.
  4. Your API key will be displayed on the screen.
  5. Add your IP to the Access List (optional): If your organization requires an IP access list for the Atlas Administration API, the requester's IP must be added to that list. To check whether this is enabled, go to Organization Settings -> Require IP Access List for the Atlas Administration API. In my case, it is disabled because this is just a demonstration, but if you are using this in an organization, I strongly advise you to enable it.
After creating an API key, let's start working with Terraform. You can use the IDE of your choice; I will be using VS Code. Create the files inside a folder (a sample layout is shown after this list). The files we will need at this point are:
  • main.tf: In this file, we will define the main resource, mongodbatlas_project. Here, you will configure the project name and organization ID, as well as other specific settings, such as teams, limits, and alert settings.
  • provider.tf: This file is where we define the provider we are using — in our case, mongodbatlas. Here, you will also include the access credentials, such as the API key.
  • terraform.tfvars: This file contains the variables that will be used in our project — for example, the project name, team information, and limits, among others.
  • variable.tf: Here, we define the variables mentioned in the terraform.tfvars file, specifying the type and, optionally, a default value.
  • versions.tf: This file is used to specify the version of Terraform and the providers we are using.
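Assuming a folder called project (the name is arbitrary), the layout should look like this:

project/
├── main.tf
├── provider.tf
├── terraform.tfvars
├── variable.tf
└── versions.tf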
The main.tf file is the heart of our Terraform project. In it, you start with the data source declaration mongodbatlas_roles_org_id to obtain the org_id, which is essential for creating the project. Next, you define the mongodbatlas_project resource with several settings. Here are some examples:
  • name and org_id are basic settings for the project name and organization ID.
  • Dynamic blocks are used to dynamically configure teams and limits, allowing flexibility and code reuse.
  • Other settings, like with_default_alerts_settings and is_data_explorer_enabled, are options for customizing the behavior of your MongoDB Atlas project.
In the main.tf file, we will then add our project resource, called mongodbatlas_project.
1data "mongodbatlas_roles_org_id" "org" {}
2
3resource "mongodbatlas_project" "default" {
4 name = var.name
5 org_id = data.mongodbatlas_roles_org_id.org.org_id
6
7 dynamic "teams" {
8 for_each = var.teams
9 content {
10 team_id = teams.value.team_id
11 role_names = teams.value.role_names
12 }
13 }
14
15 dynamic "limits" {
16 for_each = var.limits
17 content {
18 name = limits.value.name
19 value = limits.value.value
20 }
21 }
22
23 with_default_alerts_settings = var.with_default_alerts_settings
24 is_collect_database_specifics_statistics_enabled = var.is_collect_database_specifics_statistics_enabled
25 is_data_explorer_enabled = var.is_data_explorer_enabled
26 is_extended_storage_sizes_enabled = var.is_extended_storage_sizes_enabled
27 is_performance_advisor_enabled = var.is_performance_advisor_enabled
28 is_realtime_performance_panel_enabled = var.is_realtime_performance_panel_enabled
29 is_schema_advisor_enabled = var.is_schema_advisor_enabled
30}
In the provider file, we define the provider we are using and the API key it will use. As we are just testing, I will pass the API key as a variable that we input into our code. However, in production you will not want the API key exposed in plain text in the code; instead, you can pass it through environment variables or a secrets manager such as AWS Secrets Manager.
1provider "mongodbatlas" {
2 public_key = var.atlas_public_key
3 private_key = var.atlas_private_key
4}
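If you prefer not to keep the keys in variables at all, the MongoDB Atlas provider can also read them from the MONGODB_ATLAS_PUBLIC_KEY and MONGODB_ATLAS_PRIVATE_KEY environment variables, in which case the provider block stays empty. A quick sketch:

# Shell: export the credentials before running Terraform
export MONGODB_ATLAS_PUBLIC_KEY="<your public key>"
export MONGODB_ATLAS_PRIVATE_KEY="<your private key>"

# provider.tf: no credentials in the code
provider "mongodbatlas" {}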
In the variable.tf file, we specify the variables that we expect the user to provide. As mentioned earlier, the API key is one example.
1variable "name" {
2 description = <<HEREDOC
3 The name of the project you want to create.
4 HEREDOC
5 type = string
6}
7
8variable "teams" {
9 description = <<HEREDOC
10 The list of teams that belong to the project.
11 The roles can be:
12 Organization:
13 ORG_OWNER
14 ORG_MEMBER
15 ORG_GROUP_CREATOR
16 ORG_BILLING_ADMIN
17 ORG_READ_ONLY
18 Project:
19 GROUP_CLUSTER_MANAGER
20 GROUP_DATA_ACCESS_ADMIN
21 GROUP_DATA_ACCESS_READ_ONLY
22 GROUP_DATA_ACCESS_READ_WRITE
23 GROUP_OWNER
24 GROUP_READ_ONLY
25 HEREDOC
26 type = list(object({
27 team_id = string
28 role_names = list(string)
29 }))
30 default = []
31}
32
33variable "is_collect_database_specifics_statistics_enabled" {
34 description = <<HEREDOC
35 If true, Atlas collects and stores database-specific statistics for the specified project.
36 HEREDOC
37 type = bool
38 default = true
39}
40
41variable "is_data_explorer_enabled" {
42 description = <<HEREDOC
43 If true, Atlas enables Data Explorer for the specified project.
44 HEREDOC
45 type = bool
46 default = false
47}
48
49variable "is_extended_storage_sizes_enabled" {
50 description = <<HEREDOC
51 If true, Atlas enables extended storage sizes for the specified project.
52 HEREDOC
53 type = bool
54 default = true
55}
56
57variable "is_performance_advisor_enabled" {
58 description = <<HEREDOC
59 If true, Atlas enables Performance Advisor for the specified project.
60 HEREDOC
61 type = bool
62 default = true
63}
64
65
66variable "is_realtime_performance_panel_enabled" {
67 description = <<HEREDOC
68 If true, Atlas enables the Real Time Performance Panel for the specified project.
69 HEREDOC
70 type = bool
71 default = true
72}
73
74
75variable "is_schema_advisor_enabled" {
76 description = <<HEREDOC
77 If true, Atlas enables Schema Advisor for the specified project.
78 HEREDOC
79 type = bool
80 default = true
81}
82
83
84variable "with_default_alerts_settings" {
85 description = <<HEREDOC
86 If true, Atlas enables default alerts settings for the specified project.
87 HEREDOC
88 type = bool
89 default = true
90}
91
92
93variable "limits" {
94 description = <<HEREDOC
95 Allows one to configure a variety of limits to a Project. The limits attribute is optional.
96 https://mongodb.prakticum-team.ru/docs/atlas/reference/api-resources-spec/v2/#tag/Projects/operation/setProjectLimit
97 HEREDOC
98 type = list(object({
99 name = string
100 value = string
101 }))
102
103
104 default = []
105}
106
107
108variable "atlas_public_key" {
109 description = <<HEREDOC
110 The public key of the Atlas user you want to use to create the project.
111 HEREDOC
112 type = string
113}
114
115
116variable "atlas_private_key" {
117 description = <<HEREDOC
118 The private key of the Atlas user you want to use to create the project.
119 HEREDOC
120 type = string
121}
As you can see, we are declaring several variables. The idea is that we can reuse the module to create different projects with different specifications. For example, one project may need its database data to be browsable in the Atlas UI, so it sets the is_data_explorer_enabled variable to true. On the other hand, if for security reasons a company does not want users to visualize data through the platform, it can set the variable to false, and the Collections button that appears when you open a cluster in Atlas will disappear.
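For instance, if a project should keep Data Explorer enabled, a .tfvars file could simply override the default (the key values below are placeholders):

name                     = "project-test"
atlas_public_key         = "YOUR PUBLIC KEY"
atlas_private_key        = "YOUR PRIVATE KEY"
is_data_explorer_enabled = true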
In the versions.tf file, we specify the version of Terraform we use and the necessary providers with their respective versions. This configuration is crucial so that the code always runs in the same version, avoiding inconsistencies and incompatibilities that may arise with updates.
Here is what our versions.tf file will look like:
terraform {
  required_version = ">= 0.12"
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "1.14.0"
    }
  }
}
  • required_version = ">= 0.12": This line specifies that your Terraform project requires, at a minimum, Terraform version 0.12. By using >=, you indicate that any version of Terraform from 0.12 onward is compatible with your project. This offers some flexibility by allowing team members and automation systems to use newer versions of Terraform as long as they are not older than 0.12.
  • required_providers: This section lists the providers required for your Terraform project. In your case, you are specifying the mongodbatlas provider.
  • source = "mongodb/mongodbatlas": This defines the source of the mongodbatlas provider. Here, mongodb/mongodbatlas is the official identifier of the MongoDB Atlas provider in the Terraform Registry.
  • version = "1.14.0": This line specifies the exact version of the mongodbatlas provider that your project will use, which is version 1.14.0. Unlike Terraform configuration, where we specify a minimum version, here you are defining a provider-specific version. This ensures that everyone using your code will work with the same version of the provider, avoiding discrepancies and issues related to version differences.
Finally, we have the variables file that feeds values into our code, terraform.tfvars.
1name = "project-test"
2atlas_public_key = "YOUR PUBLIC KEY"
3atlas_private_key = "YOUR PRIVATE KEY"
We are setting the value of the name variable, which is the project name, and the public/private key pair for our provider. You may wonder, "Where are the other variables that we specified in the main.tf and variable.tf files?" The answer is that those variables were declared with a default value in the variable.tf file — for example, the limits variable:
1variable "limits" {
2 description = <<HEREDOC
3 Allows one to configure a variety of limits to a Project. The limits attribute is optional.
4 https://mongodb.prakticum-team.ru/docs/atlas/reference/api-resources-spec/v2/#tag/Projects/operation/setProjectLimit
5 HEREDOC
6 type = list(object({
7 name = string
8 value = string
9 }))
10 default = []
11}
We are saying that if nothing is passed in .tfvars, the default value is an empty list, which means no limit rules will be created for our project. If we want to specify limits, we just add the following to .tfvars:
1name = "project-test"
2atlas_public_key = "YOUR PUBLIC KEY"
3atlas_private_key = "YOUR PRIVATE KEY"
4limits = [{
5 "name" : "atlas.project.deployment.clusters",
6 "value" : 26
7 }, {
8 "name" : "atlas.project.deployment.nodesPerPrivateLinkRegion",
9 "value" : 51
10}]
Now is the time to apply. =D
Run terraform init in the terminal, in the folder where the files are located, so that Terraform downloads the providers, modules, and so on.
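In the project folder, that looks like this (fmt and validate are optional but useful checks before planning):

terraform init      # downloads the mongodbatlas provider
terraform fmt       # optional: formats the configuration files
terraform validate  # optional: checks the configuration for errors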
Now that init has completed, let's run a plan and evaluate what will happen, using terraform plan:
samuelmolling@Samuels-MacBook-Pro project % terraform plan
data.mongodbatlas_roles_org_id.org: Reading...
data.mongodbatlas_roles_org_id.org: Read complete after 0s [id=5d7072c9014b769c4bd89f60]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # mongodbatlas_project.default will be created
  + resource "mongodbatlas_project" "default" {
      + cluster_count                                     = (known after apply)
      + created                                           = (known after apply)
      + id                                                = (known after apply)
      + is_collect_database_specifics_statistics_enabled  = true
      + is_data_explorer_enabled                          = false
      + is_extended_storage_sizes_enabled                 = true
      + is_performance_advisor_enabled                    = true
      + is_realtime_performance_panel_enabled             = true
      + is_schema_advisor_enabled                         = true
      + name                                              = "project-test"
      + org_id                                            = "5d7072c9014b769c4bd89f60"
      + with_default_alerts_settings                      = true

      + limits {
          + current_usage = (known after apply)
          + default_limit = (known after apply)
          + maximum_limit = (known after apply)
          + name          = "atlas.project.deployment.clusters"
          + value         = 26
        }
      + limits {
          + current_usage = (known after apply)
          + default_limit = (known after apply)
          + maximum_limit = (known after apply)
          + name          = "atlas.project.deployment.nodesPerPrivateLinkRegion"
          + value         = 51
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run `terraform apply` now.
Excellent! It was exactly the output we expected to see: the creation of a project resource, with some limits and is_data_explorer_enabled set to false. Let's apply this!
When you run the command terraform apply, you will be asked for approval with yes or no. Type yes.
samuelmolling@Samuels-MacBook-Pro project % terraform apply
data.mongodbatlas_roles_org_id.org: Reading...
data.mongodbatlas_roles_org_id.org: Read complete after 0s [id=5d7072c9014b769c4bd89f60]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # mongodbatlas_project.default will be created
  + resource "mongodbatlas_project" "default" {
      + cluster_count                                     = (known after apply)
      + created                                           = (known after apply)
      + id                                                = (known after apply)
      + is_collect_database_specifics_statistics_enabled  = true
      + is_data_explorer_enabled                          = false
      + is_extended_storage_sizes_enabled                 = true
      + is_performance_advisor_enabled                    = true
      + is_realtime_performance_panel_enabled             = true
      + is_schema_advisor_enabled                         = true
      + name                                              = "project-test"
      + org_id                                            = "5d7072c9014b769c4bd89f60"
      + with_default_alerts_settings                      = true

      + limits {
          + current_usage = (known after apply)
          + default_limit = (known after apply)
          + maximum_limit = (known after apply)
          + name          = "atlas.project.deployment.clusters"
          + value         = 26
        }
      + limits {
          + current_usage = (known after apply)
          + default_limit = (known after apply)
          + maximum_limit = (known after apply)
          + name          = "atlas.project.deployment.nodesPerPrivateLinkRegion"
          + value         = 51
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

mongodbatlas_project.default: Creating...
mongodbatlas_project.default: Creation complete after 9s [id=659ed54eb3343935e840ce1f]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Now, let's look at Atlas to see if the project was created successfully...
MongoDB Atlas view of our project
It worked!
In this tutorial, we saw how to create our first API key. We created a project using Terraform and our first module. In an upcoming article, we’ll look at how to create a cluster and user using Terraform and Atlas.
To learn more about MongoDB and various tools, I invite you to visit the MongoDB Developer Center and read the other articles.
