Terraforming AI Workflows: RAG With MongoDB Atlas and Spring AI

Tim Kelly • 11 min read • Published Nov 18, 2024 • Updated Nov 18, 2024
Terraform • AI • Spring • Java
What’s the easiest way to handle infrastructure management without manually clicking around in a cloud provider’s UI? Better yet, how can you automate the entire process—databases, clusters, indexes—so you can just focus on building your app? Well, that’s exactly where HashiCorp Terraform comes in, and today we’re going to see it in action with MongoDB Atlas.
I know what you might be thinking. Infrastructure? Not the most exciting thing in the world. But hear me out. Terraform, in all its declarative glory, can make managing cloud resources feel less like a chore and more like laying down a blueprint for something awesome. By writing a few lines of code (because no amount of UX will ever make me enjoy UI for DevOps), you'll have your infrastructure up and running without breaking a sweat.
In this tutorial, we're setting up a MongoDB Atlas cluster, configuring vector search, and building a retrieval-augmented generation (RAG) app with Spring Boot and OpenAI embeddings. And the best part? We’ll use Terraform to automate the whole thing. MongoDB Atlas clusters, IP access control lists, backups—it’s all handled for us.
We’ll see how to programmatically deploy everything you need for a scalable, cloud-first setup. No cloud provider hopping. No manual mistakes. Just clean, automated infrastructure ready to support your RAG app. And whether you’re rolling on AWS, Azure, or Google Cloud, Terraform has you covered.
So, by the end of this tutorial, you won’t just have a slick RAG app. You’ll also have a solid understanding of how Terraform and MongoDB Atlas can work together to make your life easier. The whole project is available on GitHub. Let's jump into the code.

Prerequisites

To follow along with this tutorial, we need to ensure we have:
  • Java 21 or higher.
  • Maven or Gradle (for managing dependencies): We use Maven for this tutorial.
  • A MongoDB Atlas account, with a billing method added (for our paid tier clusters).
  • Terraform installed on your system, ideally on the latest Terraform Core major version, which as of this writing is v1.9 (a quick toolchain check is shown below the list).
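We can quickly verify the toolchain from a terminal using each tool's standard version flag:

java -version        # should report Java 21 or higher
mvn -v               # confirms Maven is available
terraform -version   # should report v1.9.x or later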

Infrastructure setup with Terraform

We will first configure the MongoDB Atlas cluster, database user, and vector search index using Terraform.

Initialize Terraform project

We create a new directory for our Terraform project containing two files: main.tf and variables.tf.
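On any Unix-like shell, something like this does the job (the directory name is just an example):

mkdir terraform-atlas-rag && cd terraform-atlas-rag
touch main.tf variables.tf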

Define variables in variables.tf

Now, let’s talk about the variables we’re going to use for MongoDB Atlas. Instead of hardcoding values directly into the Terraform config (which is an anti-pattern), we define all the important stuff as variables. That way, we can easily swap out values when moving between environments like dev, staging, and production, without touching the main Terraform code.
1variable "atlas_org_id" {
2 description = "MongoDB Atlas Organization ID"
3 type = string
4}
5
6variable "public_key" {
7 description = "Public API key for MongoDB Atlas"
8 type = string
9}
10
11variable "private_key" {
12 description = "Private API key for MongoDB Atlas"
13 type = string
14}
15
16variable "cluster_name" {
17 description = "Name of the MongoDB Atlas cluster"
18 type = string
19 default = "RagCluster"
20}
21
22variable "project_name" {
23 description = "Name of the MongoDB Atlas project"
24 type = string
25 default = "RAGProject"
26}
27
28variable "db_username" {
29 description = "MongoDB database username"
30 type = string
31}
32
33variable "db_password" {
34 description = "MongoDB database password"
35 type = string
36}
37
38variable "ip_address" {
39 description = "IP address to whitelist"
40 type = string
41}

Configure MongoDB Atlas cluster in main.tf

Alright, so what we have here is a Terraform configuration that’s basically doing all the heavy lifting for setting up our MongoDB Atlas infrastructure. Instead of manually clicking around in MongoDB Atlas or dealing with UI (because who really enjoys doing that for DevOps?), this code automates the whole process. Let's break it down.

Terraform Block

First off, we declare the Terraform block:
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
  required_version = ">= 0.13"
}
This is like telling Terraform, "Hey, I need the MongoDB Atlas provider for this job." The provider is an official, HashiCorp-supported plugin that lets Terraform manage MongoDB Atlas resources. We're also making sure Terraform is on version 0.13 or higher.
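If we want reproducible runs, we can also pin the provider to a known version range here. The constraint below is illustrative; pin to whatever version you've tested against:

terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.21"  # illustrative version constraint, not a recommendation
    }
  }
  required_version = ">= 0.13"
}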

MongoDB Atlas Provider configuration

Next, we configure the MongoDB Atlas provider itself:
1provider "mongodbatlas" {
2 public_key = var.public_key
3 private_key = var.private_key
4}
This is where we give Terraform the programmatic API keys (which we will set up later) it needs to talk to MongoDB Atlas on our behalf. We're using variables here (var.public_key and var.private_key), which means the actual keys live in environment variables or another secure store rather than in the config itself. No hardcoding credentials here—security first, right?

MongoDB Atlas project setup

Now, we create a project in MongoDB Atlas:
1resource "mongodbatlas_project" "rag_project" {
2 name = var.project_name
3 org_id = var.atlas_org_id
4}
This block spins up a MongoDB Atlas project with a name and organization ID we’ve already defined in variables. Think of this as the folder where all our databases, clusters, and users live.

Cluster configuration

Now, onto the good stuff: building the actual cluster:
1resource "mongodbatlas_advanced_cluster" "rag_cluster" {
2 project_id = mongodbatlas_project.rag_project.id
3 name = var.cluster_name
4 cluster_type = "REPLICASET"
5
6 replication_specs {
7 region_configs {
8 electable_specs {
9 instance_size = "M10"
10 node_count = 3
11 }
12 provider_name = "AWS"
13 region_name = "EU_WEST_1"
14 priority = 7
15 }
16 }
17}
This is where we set up a replica set cluster on MongoDB Atlas (basically, high availability for our data). We specify the instance size (M10, which is a reasonably small instance) and spin up three nodes in the EU_WEST_1 region on AWS. We also give it a priority of 7, which affects the election process if a node fails. Long story short: This configuration is setting up a resilient, production-ready cluster.

IP whitelisting

Next, we need to make sure only trusted IPs can access our MongoDB cluster:
1resource "mongodbatlas_project_ip_access_list" "ip_list" {
2 project_id = mongodbatlas_project.rag_project.id
3 ip_address = var.ip_address
4}
This block whitelists an IP address so only devices from that address can talk to our cluster. Super important for security—only let in the good guys.
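If a single address is too restrictive (say, a whole team on one network), the same resource also accepts a CIDR range instead of an individual IP. A short sketch, with an example range:

resource "mongodbatlas_project_ip_access_list" "office_network" {
  project_id = mongodbatlas_project.rag_project.id
  cidr_block = "203.0.113.0/24"  # example range; replace with your network's CIDR
}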

Database user setup

Finally, we create a database user:
1resource "mongodbatlas_database_user" "db_user" {
2 username = var.db_username
3 password = var.db_password
4 project_id = mongodbatlas_project.rag_project.id
5 auth_database_name = "admin"
6
7 roles {
8 role_name = "readWrite"
9 database_name = "rag"
10 }
11}
This block creates a new MongoDB database user with read and write access to the rag database. We’re using variables for the username and password, so those are kept secure and out of the main config. This user is created in the admin database but has permissions for the rag database, which is the one we’re working with.
So, in short: This Terraform config is setting up a fully automated MongoDB Atlas project with a cluster, access controls, and a user, all while keeping everything secure and ready for action.
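As an aside, if we wanted this user restricted to just our cluster rather than every deployment in the project, the provider's database user resource supports an optional scopes block. A sketch under that assumption:

resource "mongodbatlas_database_user" "db_user_scoped" {
  username           = var.db_username
  password           = var.db_password
  project_id         = mongodbatlas_project.rag_project.id
  auth_database_name = "admin"

  roles {
    role_name     = "readWrite"
    database_name = "rag"
  }

  # Limit this user to the RAG cluster only
  scopes {
    name = mongodbatlas_advanced_cluster.rag_cluster.name
    type = "CLUSTER"
  }
}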

Initialize and apply Terraform

To set up our Terraform project, we need to add some environment variables.
export TF_VAR_atlas_org_id="$MONGODB_ORG_ID"
export TF_VAR_public_key="$MONGODB_PUBLIC_KEY"
export TF_VAR_private_key="$MONGODB_PRIVATE_KEY"
export TF_VAR_db_username="$MONGODB_USER"
export TF_VAR_db_password="$MONGODB_PASSWORD"
export TF_VAR_ip_address="$IP_ADDRESS"
Note: Terraform automatically picks up environment variables beginning with TF_VAR_ and maps them to the corresponding input variables.
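Alternatively, Terraform can read the same values from a terraform.tfvars file. Just keep that file out of version control, since it holds credentials:

# terraform.tfvars (add to .gitignore)
atlas_org_id = "<your-org-id>"
public_key   = "<your-public-key>"
private_key  = "<your-private-key>"
db_username  = "<db-username>"
db_password  = "<db-password>"
ip_address   = "<your-ip-address>"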
To find our Atlas organization ID, we go to the organization's home page in the Atlas UI and select Settings, where the organization ID is displayed.
Next, we need to create a public and private API key pair. Go to the Access Manager at the top of the Atlas UI, select Organization Access, and then API Keys. Create a key with the Organization Owner permission and copy both keys to use as environment variables.
Finally, we choose the username and password for the database user that Terraform will create with read/write permissions on our cluster.
Once these are added, it's time to initialize and apply our Terraform configuration, thus creating our project.
  1. Initialize the Terraform project:
    terraform init
  2. Apply the configuration:
    terraform apply
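Optionally, we can run a plan between these two steps to preview exactly what Terraform is about to create before anything is provisioned (and billed):

terraform plan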
Running apply will provision the MongoDB Atlas project and cluster for the RAG app, as well as create the database user and IP access list. (We'll add the vector search index in a later step.)

Build the Spring Boot RAG application

Next, we will create a Spring Boot application that connects to MongoDB Atlas, generates embeddings using OpenAI, and performs retrieval-augmented generation (RAG).
You can clone this application from the GitHub repository Spring-AI-Rag, or follow the steps below.
This is going to be a quick run-through of how to create our RAG application. Everything you need will be in this tutorial! But if you want more of the nuance behind what we are building here, such as what RAG and Spring AI actually are, I recommend reading my other tutorials, Retrieval-Augmented Generation With MongoDB and Spring AI: Bringing AI to Your Java Applications and Building a Semantic Search Service With Spring AI and MongoDB Atlas. I repeat: All the code and configuration will be in this tutorial!

Initialize the Spring Boot project

Go to Spring Initializr to initialize the project:
  • Group: com.mongodb
  • Artifact: RagApp
  • Dependencies:
    • Spring Web
    • MongoDB Atlas Vector Database
    • OpenAI
Download the project and open it in your preferred IDE.

Add dependencies to pom.xml

In our pom.xml file, ensure we have the following dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-mongodb</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-openai</artifactId>
        <version>1.0.0-SNAPSHOT</version>
    </dependency>
</dependencies>

Configure application properties

Open the application.properties file in the resources folder and configure the following properties for MongoDB and OpenAI:
spring.application.name=RagApp

# OpenAI API key
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4

# MongoDB Atlas URI and Database
spring.data.mongodb.uri=${MONGO_URI}
spring.data.mongodb.database=rag

spring.ai.vectorstore.mongodb.initialize-schema=false
We can retrieve our MongoDB connection URI from the Atlas UI for the cluster we just created.
These properties will connect our application to MongoDB Atlas and OpenAI using the environment variables.
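Like the Terraform variables, these can be exported in the shell before starting the app (placeholder values shown):

export OPENAI_API_KEY="<your-openai-api-key>"
export MONGO_URI="mongodb+srv://<db_username>:<db_password>@<cluster-host>/?retryWrites=true&w=majority"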

Configure embedding model and vector store

Create a Config.java file to configure OpenAI and MongoDB Atlas integration for embeddings and vector search.
import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.ai.openai.OpenAiEmbeddingModel;
import org.springframework.ai.openai.api.OpenAiApi;
import org.springframework.ai.vectorstore.MongoDBAtlasVectorStore;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

@Configuration
public class Config {

    @Value("${spring.ai.openai.api-key}")
    private String openAiKey;

    @Bean
    public EmbeddingModel embeddingModel() {
        return new OpenAiEmbeddingModel(new OpenAiApi(openAiKey));
    }

    @Bean
    public VectorStore mongodbVectorStore(MongoTemplate mongoTemplate, EmbeddingModel embeddingModel) {
        return new MongoDBAtlasVectorStore(mongoTemplate, embeddingModel,
                MongoDBAtlasVectorStore.MongoDBVectorStoreConfig.builder().build(), true);
    }
}
This file sets up our OpenAI embedding model for generating our embeddings and our MongoDB Atlas vector store for storing and searching our documents.
Now, it's time for us to implement a document loading service and bring Atlas Vector Search to our application.

Create a service to load documents and generate embeddings

Create a DocsLoaderService.java service in a Service package to load documents from a dataset and store them in the MongoDB Atlas vector store.
package com.mongodb.RagApp.service;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.ClassPathResource;
import org.springframework.stereotype.Service;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Service
public class DocsLoaderService {
    private static final int MAX_TOKENS_PER_CHUNK = 2000;
    private final VectorStore vectorStore;
    private final ObjectMapper objectMapper;

    @Autowired
    public DocsLoaderService(VectorStore vectorStore, ObjectMapper objectMapper) {
        this.vectorStore = vectorStore;
        this.objectMapper = objectMapper;
    }

    public String loadDocs() {
        try (InputStream inputStream = new ClassPathResource("docs/devcenter-content-snapshot.json").getInputStream();
             BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream))) {

            List<Document> documents = new ArrayList<>();
            String line;
            while ((line = reader.readLine()) != null) {
                Map<String, Object> jsonDoc = objectMapper.readValue(line, Map.class);
                String content = (String) jsonDoc.get("body");

                // Split long articles into chunks that fit the embedding model's token limit
                List<String> chunks = splitIntoChunks(content, MAX_TOKENS_PER_CHUNK);
                for (String chunk : chunks) {
                    Document document = createDocument(jsonDoc, chunk);
                    documents.add(document);
                }

                // Flush to the vector store in batches of 100 to keep memory usage down
                if (documents.size() >= 100) {
                    vectorStore.add(documents);
                    documents.clear();
                }
            }

            if (!documents.isEmpty()) {
                vectorStore.add(documents);
            }

            return "All documents added successfully!";
        } catch (Exception e) {
            return "Error while adding documents: " + e.getMessage();
        }
    }

    private Document createDocument(Map<String, Object> jsonMap, String content) {
        Map<String, Object> metadata = (Map<String, Object>) jsonMap.get("metadata");
        metadata.putIfAbsent("sourceName", jsonMap.get("sourceName"));
        metadata.putIfAbsent("url", jsonMap.get("url"));

        return new Document(content, metadata);
    }

    private List<String> splitIntoChunks(String content, int maxTokens) {
        List<String> chunks = new ArrayList<>();
        String[] words = content.split("\\s+");
        StringBuilder chunk = new StringBuilder();
        int tokenCount = 0;

        for (String word : words) {
            // Rough heuristic: approximately four characters per token
            int wordTokens = word.length() / 4;
            if (tokenCount + wordTokens > maxTokens) {
                chunks.add(chunk.toString());
                chunk.setLength(0);
                tokenCount = 0;
            }
            chunk.append(word).append(" ");
            tokenCount += wordTokens;
        }

        if (chunk.length() > 0) {
            chunks.add(chunk.toString());
        }

        return chunks;
    }
}
This service will load our documents from a JSON file stored in a directory called docs, in our resources folder. We are using the MongoDB/devcenter-articles dataset on Hugging Face. This consists of articles and tutorials from the MongoDB Developer Center.
It will then chunk our larger documents into smaller pieces (to accommodate the OpenAI token limits) and store these documents, with their embeddings, in MongoDB Atlas.
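For reference, each line of that snapshot file is a standalone JSON document. Based on the fields the loader reads above (body, metadata, sourceName, url), a record looks roughly like this (values are illustrative, not taken from the dataset):

{
  "sourceName": "devcenter",
  "url": "https://www.mongodb.com/developer/products/atlas/example-article",
  "body": "The full article text, which gets chunked and embedded...",
  "metadata": { "tags": ["Atlas", "Java"], "contentType": "Tutorial" }
}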

Retrieving and augmenting responses

Finally, we’ll create our controller to handle our queries: both to load the dataset into the database (and generate the embeddings) and to query the data. This unleashes the full power of our RAG application, interpreting our questions and generating responses backed by our own custom knowledge repository. This sounds dramatic, but it is quite cool!

Create a controller to handle RAG queries

Create a RagController.java file to accept queries from users, retrieve relevant documents using vector search, and pass them to OpenAI for augmentation.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.QuestionAnswerAdvisor;
import org.springframework.ai.vectorstore.SearchRequest;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RagController {
    private final ChatClient chatClient;

    public RagController(ChatClient.Builder builder, VectorStore vectorStore) {
        // The QuestionAnswerAdvisor retrieves relevant documents from the vector
        // store and injects them into the prompt before it reaches the model
        this.chatClient = builder
                .defaultAdvisors(new QuestionAnswerAdvisor(vectorStore, SearchRequest.defaults()))
                .build();
    }

    @GetMapping("/question")
    public String question(@RequestParam(value = "message", defaultValue = "What is RAG?") String message) {
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
We accept user queries via a /question endpoint. Our app uses vector search to find relevant documents in MongoDB, then sends those documents to OpenAI as additional context, generating an augmented response.

Load the data

Use the /api/docs/load endpoint to load documents into the MongoDB vector store.
curl http://localhost:8080/api/docs/load
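Note that this endpoint isn't defined by the service above; in the full application, it's exposed by a small controller that wraps DocsLoaderService. A minimal sketch, with the class name and request mapping assumed from the URL:

package com.mongodb.RagApp.controller;

import com.mongodb.RagApp.service.DocsLoaderService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/docs")
public class DocsLoaderController {

    private final DocsLoaderService docsLoaderService;

    public DocsLoaderController(DocsLoaderService docsLoaderService) {
        this.docsLoaderService = docsLoaderService;
    }

    // Triggers the one-time document load and embedding generation
    @GetMapping("/load")
    public String loadDocs() {
        return docsLoaderService.loadDocs();
    }
}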

Create the search index

Here we have a choice when configuring the vector search index for our application. We can either have the application create it at startup via our application.properties file, or define it as part of our Terraform infrastructure as code (IaC).

Option A: Programmatic configuration

If we want the index to be initialized as part of our application’s startup process, we simply set spring.ai.vectorstore.mongodb.initialize-schema to true in our application.properties file:
spring.ai.vectorstore.mongodb.initialize-schema=true
This approach is useful when we want rapid setup without managing external tools. It is appropriate for local development or smaller applications where the infrastructure isn’t that complex. Since everything is contained in the app, it’s straightforward and quick to modify.

Option B: Terraform configuration

However, if we want our index configuration to be part of our infrastructure management, we can use Terraform. At the bottom of our main.tf file, we'll add the code to configure and create the index:
1resource "mongodbatlas_search_index" "vector_search" {
2 name = "search-index"
3 project_id = mongodbatlas_project.rag_project.id
4 cluster_name = mongodbatlas_advanced_cluster.rag_cluster.name
5 type = "vectorSearch"
6 database = "rag"
7 collection_name = "vector_store"
8 fields = <<-EOF
9 [{
10 "type": "vector",
11 "path": "embedding",
12 "numDimensions": 1536,
13 "similarity": "cosine"
14 }]
15 EOF
16}
By using Terraform, we’re taking advantage of a declarative approach. Our infrastructure changes are codified, versioned, and easily trackable! This provides strong consistency across environments, making it ideal for production use cases or larger systems where infrastructure is complex, and automated, reproducible deployment is crucial.
Let's apply our changes now:
terraform init
terraform apply
So what's our takeaway here?
  • API approach: Oftentimes quicker to set up and modify, but lacks the consistency and version control benefits that come with infrastructure automation
  • Terraform: Adds a layer of reliability, especially for production, where consistency and automation are vital
The right approach depends on our use case. For small projects or fast iterations, deploying infrastructure directly via APIs might fit like a glove. For larger, production-grade applications, Terraform is often the preferred path.

Ask a question

Now that we have documents loaded, fields embedded, and indexes created, what's left? Well, let's learn a little about MongoDB.
Use the /question endpoint to retrieve documents and generate augmented responses. Here, we'll ask:
curl "http://localhost:8080/question?message=How%20to%20analyze%20time-series%20data%20with%20Python%20and%20MongoDB%3F"

Conclusion

This tutorial walks you through building a Spring Boot RAG application using MongoDB Atlas, OpenAI, and Terraform to manage infrastructure. The app allows users to ask questions, retrieves relevant documents using vector search, and generates context-aware responses using OpenAI, all while using Terraform for the benefits of infrastructure as code.
If you found this tutorial useful, check out our MongoDB Developer Center, where you can learn more about what you can do with Terraform and MongoDB, and learn how to do stuff like get started with MongoDB Atlas stream processing and the HashiCorp Terraform MongoDB Atlas Provider. Or head over to the MongoDB community forums to ask questions, and see what other people are building with MongoDB.
The HashiCorp Terraform Atlas Provider is open-sourced under the Mozilla Public License v2.0 and we welcome community contributions. To learn more, see our contributing guidelines.
The fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace, Google Cloud Marketplace, or Azure Marketplace. To learn more about the Terraform provider, check out the documentation, solution brief, and tutorials, or get started today.
Go build with MongoDB Atlas and HashiCorp Terraform today!