
Get Started with the LangChain JS/TS Integration

Note

This tutorial uses LangChain's JavaScript library. For a tutorial that uses the Python library, see LangChain Python.

You can integrate MongoDB Vector Search with LangChain to build LLM applications and implement retrieval-augmented generation (RAG). This tutorial demonstrates how to start using MongoDB Vector Search with LangChain to perform semantic search on your data and build a RAG implementation. Specifically, you perform the following actions:

  1. Set up the environment.

  2. Store custom data in MongoDB.

  3. Create a MongoDB Vector Search index on your data.

  4. Run the following vector search queries:

    • Semantic search.

    • Semantic search with metadata pre-filtering.

    • Maximal Marginal Relevance (MMR) search.

  5. Implement RAG by using MongoDB Vector Search to answer questions on your data.

LangChain is an open-source framework that simplifies the creation of LLM applications through the use of "chains." Chains are LangChain-specific components that can be combined for a variety of AI use cases, including RAG.

By integrating MongoDB Vector Search with LangChain, you can use MongoDB as a vector database and use MongoDB Vector Search to implement RAG by retrieving semantically similar documents from your data. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with MongoDB.

To complete this tutorial, you must have the following:

  • One of the following MongoDB cluster types:

  • A Voyage AI API key. To create an API key, see Model API Keys.

    Note

    Your API requests might fail if you don't have a payment method configured on Atlas (for API keys created in the Atlas UI) or Voyage AI (for API keys created directly with Voyage AI).

  • An OpenAI API key. You must have an OpenAI account with credits available for API requests. To learn more about registering for an OpenAI account, see the OpenAI API website.

  • A terminal and code editor to run your Node.js project.

  • npm and Node.js installed.
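You can verify both from your terminal; any recent LTS release of Node.js should work for this tutorial:

```shell
# Check that Node.js and npm are on your PATH
node --version
npm --version
```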

To set up your environment for this tutorial, complete the following steps.

1

Run the following commands in your terminal to create a new directory named langchain-mongodb and initialize your project:

mkdir langchain-mongodb
cd langchain-mongodb
npm init -y
2

Run the following command:

npm install langchain@latest @langchain/community@latest @langchain/core@latest @langchain/mongodb@latest @langchain/openai@latest @langchain/textsplitters@latest pdf-parse@1 --legacy-peer-deps
3

Configure your project to use ES modules by adding "type": "module" to your package.json file and then saving it.

{
  "type": "module",
  // other fields...
}
4

In your project, create a file named get-started.js, and then copy and paste the following code into the file. You will add code to this file throughout the tutorial.

This initial code snippet imports required packages for this tutorial, defines environment variables, and establishes a connection to your MongoDB cluster.

import { MongoClient } from "mongodb";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { ChatOpenAI } from "@langchain/openai";
import { VoyageEmbeddings } from "@langchain/community/embeddings/voyage";
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { PromptTemplate } from "@langchain/core/prompts";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import * as fs from "fs";
process.env.VOYAGEAI_API_KEY = "<api-key>";
process.env.OPENAI_API_KEY = "<api-key>";
process.env.MONGODB_URI = "<connection-string>";
const client = new MongoClient(process.env.MONGODB_URI);
const formatDocumentsAsString = (docs) => docs.map(d => d.pageContent).join("\n\n");
5

To finish setting up the environment, replace the <api-key> and <connection-string> placeholder values in get-started.js with your Voyage AI API key, your OpenAI API key, and the SRV connection string for your MongoDB cluster, respectively. Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
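Hardcoding secrets in the script is convenient for a quick local test, but as an alternative you can export them as environment variables in your shell before running the script (and delete the corresponding process.env assignments from get-started.js):

```shell
export VOYAGEAI_API_KEY="<api-key>"
export OPENAI_API_KEY="<api-key>"
export MONGODB_URI="mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net"
```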

In this section, you define an asynchronous function to load custom data into MongoDB and instantiate your MongoDB cluster as a vector database, also called a vector store. Add the following code to your get-started.js file.

Note

For this tutorial, you use a publicly accessible PDF document titled MongoDB Atlas Best Practices as the data source for your vector store. This document describes various recommendations and core concepts for managing your MongoDB deployments.

This code performs the following actions:

  • Configures your MongoDB collection by specifying the following parameters:

    • langchain_db.test as the MongoDB collection to store the documents.

    • vector_index as the index to use for querying the vector store.

    • text as the name of the field containing the raw text content.

    • embedding as the name of the field containing the vector embeddings.

  • Prepares your custom data by doing the following:

    • Retrieves the raw data from the specified URL and saves it locally as a PDF file.

    • Uses a text splitter to split the data into smaller documents.

    • Specifies chunk parameters, which determine the number of characters in each document and the number of characters that overlap between consecutive documents.

  • Creates a vector store from the sample documents by calling the MongoDBAtlasVectorSearch.fromDocuments method. This method specifies the following parameters:

    • The sample documents to store in the vector database.

    • Voyage AI's embedding model as the model used to convert text into vector embeddings for the embedding field.

    • Your MongoDB cluster configuration.

async function run() {
  try {
    // Configure your MongoDB collection
    const database = client.db("langchain_db");
    const collection = database.collection("test");
    const dbConfig = {
      collection: collection,
      indexName: "vector_index", // The name of the MongoDB Search index to use.
      textKey: "text", // Field name for the raw text content. Defaults to "text".
      embeddingKey: "embedding", // Field name for the vector embeddings. Defaults to "embedding".
    };
    // Ensure that the collection is empty
    const count = await collection.countDocuments();
    if (count > 0) {
      await collection.deleteMany({});
    }
    // Save the online PDF as a local file
    const rawData = await fetch("https://webassets.mongodb.com/MongoDB_Best_Practices_Guide.pdf");
    const pdfBuffer = await rawData.arrayBuffer();
    const pdfData = Buffer.from(pdfBuffer);
    fs.writeFileSync("atlas_best_practices.pdf", pdfData);
    // Load and split the sample data
    const loader = new PDFLoader("atlas_best_practices.pdf");
    const data = await loader.load();
    const textSplitter = new RecursiveCharacterTextSplitter({
      chunkSize: 200,
      chunkOverlap: 20,
    });
    const docs = await textSplitter.splitDocuments(data);
    // Instantiate MongoDB as a vector store
    const embeddingModel = new VoyageEmbeddings({ modelName: "voyage-4" });
    embeddingModel.apiUrl = "https://ai.mongodb.com/v1/embeddings";
    const vectorStore = await MongoDBAtlasVectorSearch.fromDocuments(docs, embeddingModel, dbConfig);
  } finally {
    // Ensure that the client closes when you finish or hit an error
    await client.close();
  }
}
run().catch(console.dir);

Save the file, then run the following command to load your data into MongoDB.

node get-started.js

Tip

After running get-started.js, if you're using Atlas, you can verify your vector embeddings by navigating to the langchain_db.test namespace in the Atlas UI.
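To build intuition for the chunkSize and chunkOverlap parameters used above, here is a simplified, plain-JavaScript sketch of fixed-size chunking with overlap. The real RecursiveCharacterTextSplitter is smarter (it prefers to split at separators such as paragraph and sentence boundaries before falling back to a hard character cut), so treat this as an illustration of the parameters, not the library's algorithm.

```javascript
// Naive character chunker: each chunk is at most `chunkSize` characters long,
// and consecutive chunks share `chunkOverlap` characters of context.
function chunkText(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const sample = "a".repeat(450); // stand-in for a page of PDF text
const chunks = chunkText(sample, 200, 20);
console.log(chunks.length); // 3
console.log(chunks.map((c) => c.length)); // [ 200, 200, 90 ]
```

With chunkSize: 200 and chunkOverlap: 20, each new chunk repeats the last 20 characters of the previous one, so text cut at a chunk boundary keeps some surrounding context.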

To enable vector search queries on your vector store, create a MongoDB Vector Search index on the langchain_db.test collection.

1
  1. Add the following code at the end of the try block of the asynchronous function that you defined in your get-started.js file. This code creates an index of the vectorSearch type to index the following fields:

    • embedding field as the vector type. The embedding field contains the embeddings created using Voyage AI's voyage-4 embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.

    • loc.pageNumber field as the filter type for pre-filtering data by the page number in the PDF.

    This code also waits briefly after creating the index to give it time to sync with your data before you query it.

    // Ensure the index does not already exist, then create your MongoDB Vector Search index
    const indexes = await collection.listSearchIndexes("vector_index").toArray();
    if (indexes.length === 0) {
      // Define your MongoDB Vector Search index
      const index = {
        name: "vector_index",
        type: "vectorSearch",
        definition: {
          "fields": [
            {
              "type": "vector",
              "numDimensions": 1024,
              "path": "embedding",
              "similarity": "cosine"
            },
            {
              "type": "filter",
              "path": "loc.pageNumber"
            }
          ]
        }
      }

      // Run the helper method
      const result = await collection.createSearchIndex(index);
      console.log(result);
    }

    // Wait for the index to build and become queryable
    console.log("Waiting for initial sync...");
    await new Promise(resolve => setTimeout(() => {
      resolve();
    }, 3000));
  2. Save the file.

2

Run the following command to create the index:

node get-started.js

This section demonstrates various queries that you can run on your vectorized data. Now that you've created the index, add the following code to your asynchronous function to run vector search queries against your data.

Note

If you experience inaccurate results when querying your data, your index might be taking longer than expected to sync. Increase the number in the setTimeout function to allow more time for the initial sync.
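Rather than guessing at a fixed timeout, you can poll until the index is ready. The helper below shows the general pattern with a generic isReady callback; the helper name and structure are illustrative, not a LangChain or driver API. Against a real cluster, you might implement the callback by calling collection.listSearchIndexes() and inspecting each index's reported status, verifying the exact response shape against your driver version.

```javascript
// Generic poll-until-ready helper: calls `isReady` every `intervalMs`
// milliseconds until it returns true or `timeoutMs` elapses.
async function waitUntilReady(isReady, { intervalMs = 1000, timeoutMs = 30000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await isReady()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for the search index to become queryable");
}

// Demo with a mock readiness check that succeeds on the third attempt
let attempts = 0;
waitUntilReady(async () => ++attempts >= 3, { intervalMs: 10 })
  .then((ready) => console.log(ready, attempts)); // true 3
```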

1

The following code uses the similaritySearch method to perform a basic semantic search for the string MongoDB Atlas security. It returns a list of documents ranked by relevance with only the pageContent and pageNumber fields.

// Basic semantic search
const basicOutput = await vectorStore.similaritySearch(
  "MongoDB Atlas security"
);
const basicResults = basicOutput.map((result) => ({
  pageContent: result.pageContent,
  pageNumber: result.metadata.loc.pageNumber,
}));
console.log("Semantic Search Results:");
console.log(basicResults);
if (basicResults.length === 0) {
  console.log("No results found after waiting for index sync. Check Atlas Search index status and embedding configuration.");
}
2

Run the following command. Your output should resemble the following:

node get-started.js
Semantic Search Results:
[
{
pageContent: 'read isolation. \n' +
'With MongoDB Atlas, you can achieve workload isolation with dedicated analytics nodes. Visualization \n' +
'tools like Atlas Charts can be configured to read from analytics nodes only.',
pageNumber: 21
},
{
pageContent: 'well-tuned queries.\n' +
'Built-in slow query profiling is also available if you’re deploying MongoDB with Atlas.',
pageNumber: 16
},
{
pageContent: 'Atlas free tier, or download MongoDB for local \n' +
'development.\n' +
'Review the MongoDB manuals and tutorials in our \n' +
'documentation. \n' +
'More Resources\n' +
'For more on getting started in MongoDB:',
pageNumber: 30
},
{
pageContent: 'If you are running MongoDB on your own infrastructure, you can configure replica set tags to achieve \n' +
'read isolation.',
pageNumber: 21
}
]

You can pre-filter your data by using an MQL match expression that compares the indexed field with another value in your collection. You must index any metadata fields that you want to filter by as the filter type. To learn more, see How to Index Fields for Vector Search.

Note

You specified the loc.pageNumber field as a filter when you created the index for this tutorial.

1

The following code uses the similaritySearch method to perform a semantic search for the string MongoDB Atlas Search. It specifies the following parameters:

  • The number of documents to return as 3.

  • A pre-filter on the loc.pageNumber field that uses the $eq operator to match documents appearing on page 22 only.

It returns a list of documents ranked by relevance with only the pageContent and pageNumber fields.

// Semantic search with metadata filter
const filteredOutput = await vectorStore.similaritySearch("MongoDB Atlas Search", 3, {
  preFilter: {
    "loc.pageNumber": { "$eq": 22 },
  },
});
const filteredResults = filteredOutput.map((result) => ({
  pageContent: result.pageContent,
  pageNumber: result.metadata.loc.pageNumber,
}));
console.log("Semantic Search with Filtering Results:");
console.log(filteredResults);
2

Run the following command. Your output should resemble the following:

node get-started.js
Semantic Search with Filtering Results:
[
{
pageContent: 'Atlas Search is built for the MongoDB document data model and provides higher performance and',
pageNumber: 22
},
{
pageContent: 'Figure 9: Atlas Search queries are expressed through the MongoDB Query API and backed by the leading search engine library, \n' +
'Apache Lucene.',
pageNumber: 22
},
{
pageContent: 'consider using Atlas Search. The service is built on fully managed Apache Lucene but exposed to users \n' +
'through the MongoDB Aggregation Framework.',
pageNumber: 22
}
]

You can also perform semantic search based on Maximal Marginal Relevance (MMR), which ranks results for relevance to the query while promoting diversity among the returned documents.
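Conceptually, MMR greedily builds the result set by scoring each candidate as a trade-off between relevance to the query and similarity to documents already selected. The following plain-JavaScript sketch over toy vectors illustrates the idea; it is not the library's implementation, and the cosine and mmr helpers here are illustrative only.

```javascript
// Cosine similarity between two equal-length vectors
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy MMR selection: score = lambda * relevance - (1 - lambda) * redundancy,
// where redundancy is the candidate's highest similarity to any selected document.
function mmr(queryVec, candidates, k, lambda = 0.5) {
  const selected = [];
  const remaining = candidates.map((vec, id) => ({ id, vec }));
  while (selected.length < k && remaining.length > 0) {
    let best = null, bestScore = -Infinity;
    for (const cand of remaining) {
      const relevance = cosine(queryVec, cand.vec);
      const redundancy = selected.length
        ? Math.max(...selected.map((s) => cosine(s.vec, cand.vec)))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) { bestScore = score; best = cand; }
    }
    selected.push(best);
    remaining.splice(remaining.indexOf(best), 1);
  }
  return selected.map((s) => s.id);
}

// Toy example: two near-duplicate vectors and one distinct vector
const query = [1, 0];
const docs = [[1, 0], [0.9, 0.1], [0, 1]];
console.log(mmr(query, docs, 2, 0.3)); // [ 0, 2 ]
```

With a low lambda, the near-duplicate of the first result is passed over in favor of the more diverse third document; with lambda set to 1, the selection degenerates to plain relevance ranking.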

1

The following code uses the maxMarginalRelevanceSearch method to search for the string MongoDB Atlas security. It also specifies an object that defines the following optional parameters:

  • k to limit the number of returned documents to 3.

  • fetchK to fetch only 10 documents before passing the documents to the MMR algorithm.

It returns a list of documents ranked by relevance with only the pageContent and pageNumber fields.

// Max Marginal Relevance search
const mmrOutput = await vectorStore.maxMarginalRelevanceSearch("MongoDB Atlas security", {
  k: 3,
  fetchK: 10,
});
const mmrResults = mmrOutput.map((result) => ({
  pageContent: result.pageContent,
  pageNumber: result.metadata.loc.pageNumber,
}));
console.log("Max Marginal Relevance Search Results:");
console.log(mmrResults);
2

Run the following command. Your output should resemble the following:

node get-started.js
Max Marginal Relevance Search Results:
[
{
pageContent: 'Atlas Search is built for the MongoDB document data model and provides higher performance and',
pageNumber: 22
},
{
pageContent: '• Zoned Sharding — You can define specific rules governing data placement in a sharded cluster.\n' +
'Global Clusters in MongoDB Atlas allows you to quickly implement zoned sharding using a visual UI or',
pageNumber: 27
},
{
pageContent: 'read isolation. \n' +
'With MongoDB Atlas, you can achieve workload isolation with dedicated analytics nodes. Visualization \n' +
'tools like Atlas Charts can be configured to read from analytics nodes only.',
pageNumber: 21
}
]

Tip

For more information, refer to the API reference.

This section demonstrates two different RAG implementations using MongoDB Vector Search and LangChain. Now that you've used MongoDB Vector Search to retrieve semantically similar documents, use the following code examples to prompt the LLM to answer questions against the documents returned by MongoDB Vector Search.

1

This code does the following:

  • Instantiates MongoDB Vector Search as a retriever to query for semantically similar documents.

  • Defines a LangChain prompt template to instruct the LLM to use these documents as context for your query. LangChain passes these documents to the {context} input variable and your query to the {question} variable.

  • Constructs a chain that uses OpenAI's chat model to generate context-aware responses based on your prompt.

  • Prompts the chain with a sample query about Atlas security recommendations.

  • Returns the LLM's response and the documents used as context.

// Implement RAG to answer questions on your data
const retriever = vectorStore.asRetriever();
const prompt =
  PromptTemplate.fromTemplate(`Answer the question based on the following context:
{context}
Question: {question}`);
const model = new ChatOpenAI({ modelName: "gpt-5-mini" }); // Pick your preferred model. Make sure it's enabled in your OpenAI settings dashboard.
const chain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);
// Prompt the LLM
const question = "How can I secure my MongoDB Atlas cluster?";
const answer = await chain.invoke(question);
console.log("Question: " + question);
console.log("Answer: " + answer);
// Return source documents
const retrievedResults = await retriever.invoke(question);
const documents = retrievedResults.map((doc) => ({
  pageContent: doc.pageContent,
  pageNumber: doc.metadata.loc.pageNumber,
}));
console.log("\nSource documents:\n" + JSON.stringify(documents, null, 2));
2

After you save the file, run the following command. The generated response might vary.

node get-started.js
Question: How can I secure my MongoDB Atlas cluster?
Answer: You can secure your MongoDB Atlas cluster by achieving workload isolation with dedicated analytics nodes, configuring visualization tools like Atlas Charts to read from analytics nodes only, and using built-in slow query profiling if deploying with Atlas. Additionally, you can distribute replica set members across multiple data centers for added security during election and failover.
Source documents:
[
{
"pageContent": "read isolation. \nWith MongoDB Atlas, you can achieve workload isolation with dedicated analytics nodes. Visualization \ntools like Atlas Charts can be configured to read from analytics nodes only.",
"pageNumber": 21
},
{
"pageContent": "If you are running MongoDB on your own infrastructure, you can configure replica set tags to achieve \nread isolation.",
"pageNumber": 21
},
{
"pageContent": "well-tuned queries.\nBuilt-in slow query profiling is also available if you’re deploying MongoDB with Atlas.",
"pageNumber": 16
},
{
"pageContent": "achieved during election and failover. \nIf possible, distribute replica set members across multiple data centers. If you’re using MongoDB Atlas,",
"pageNumber": 24
}
]
1

This code does the following:

  • Instantiates MongoDB Vector Search as a retriever to query for semantically similar documents. It also specifies the following optional parameters:

    • searchType as mmr, which specifies that MongoDB Vector Search retrieves documents based on Max Marginal Relevance (MMR).

    • filter to add a pre-filter on the loc.pageNumber field to include documents that appear on page 17 only.

    • The following MMR-specific parameters:

      • fetchK to fetch only 20 documents before passing the documents to the MMR algorithm.

      • lambda, a value between 0 and 1 to determine the degree of diversity among the results, with 0 representing maximum diversity and 1 representing minimum diversity.

  • Defines a LangChain prompt template to instruct the LLM to use these documents as context for your query. LangChain passes these documents to the {context} input variable and your query to the {question} variable.

  • Constructs a chain that uses OpenAI's chat model to generate context-aware responses based on your prompt.

  • Prompts the chain with a sample query about Atlas security recommendations.

  • Returns the LLM's response and the documents used as context.

// Implement RAG to answer questions on your data
const retriever = vectorStore.asRetriever({
  searchType: "mmr", // Defaults to "similarity"
  filter: { preFilter: { "loc.pageNumber": { "$eq": 17 } } },
  searchKwargs: {
    fetchK: 20,
    lambda: 0.1,
  },
});
const prompt =
  PromptTemplate.fromTemplate(`Answer the question based on the following context:
{context}
Question: {question}`);
const model = new ChatOpenAI({});
const chain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);
// Prompt the LLM
const question = "How can I secure my MongoDB Atlas cluster?";
const answer = await chain.invoke(question);
console.log("Question: " + question);
console.log("Answer: " + answer);
// Return source documents
const retrievedResults = await retriever.invoke(question);
const documents = retrievedResults.map((doc) => ({
  pageContent: doc.pageContent,
  pageNumber: doc.metadata.loc.pageNumber,
}));
console.log("\nSource documents:\n" + JSON.stringify(documents, null, 2));
2

After you save the file, run the following command. The generated response might vary.

node get-started.js
Question: How can I secure my MongoDB Atlas cluster?
Answer: One way to secure your MongoDB Atlas cluster is by implementing proper access controls and ensuring that only authorized users have access to your data. You can also enable encryption at rest and in transit, use network security features such as VPC peering, and regularly update and patch your MongoDB database to protect against security vulnerabilities. Additionally, implementing auditing and monitoring tools can help you detect and respond to any security incidents in a timely manner.
Source documents:
[
{
"pageContent": "Optimizing Data \nAccess Patterns\nNative tools in MongoDB for improving query \nperformance and reducing overhead.",
"pageNumber": 17
}
]

To learn how to integrate MongoDB Vector Search with LangGraph, see Integrate MongoDB with LangGraph.js.

MongoDB also provides the following developer resources:
