
Integrate MongoDB with LangGraph

You can integrate MongoDB with LangGraph to build AI agents and advanced RAG applications. This page provides an overview of the MongoDB LangGraph integration and how you can use MongoDB for agent state persistence, memory, and retrieval in your LangGraph workflows.

To build a sample AI agent that uses all of the components on this page, see the tutorial.

Note

For the JavaScript integration, see LangGraph JS/TS.

LangGraph is a specialized framework within the LangChain ecosystem designed for building AI agents and complex multi-agent workflows. Graphs are the core components of LangGraph, representing the workflow of your agent. The MongoDB LangGraph integration enables the following capabilities:

  • MongoDB LangGraph Checkpointer: You can persist the state of your LangGraph agents in MongoDB, providing short-term memory.

  • MongoDB LangGraph Store: You can store and retrieve important memories for your LangGraph agents in a MongoDB collection, providing long-term memory.

  • Retrieval Tools: You can use the MongoDB LangChain integration to quickly create retrieval tools for your LangGraph workflows.

Integrating your LangGraph applications with MongoDB allows you to consolidate both retrieval capabilities and agent memory in a single database, simplifying your architecture and reducing operational complexity.

The MongoDB LangGraph Checkpointer allows you to persist your agent's state in MongoDB to implement short-term memory. This feature enables human-in-the-loop workflows, memory, time travel, and fault tolerance for your LangGraph agents.

To install the package for this component:

pip install langgraph-checkpoint-mongodb

from langgraph.checkpoint.mongodb import MongoDBSaver
from pymongo import MongoClient

# Connect to your MongoDB cluster
client = MongoClient("<connection-string>")

# Initialize the MongoDB checkpointer
checkpointer = MongoDBSaver(client)

# Instantiate the graph with the checkpointer
app = graph.compile(checkpointer=checkpointer)
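Checkpoints are scoped to a thread: when you invoke the compiled graph, you pass a `thread_id` in the run configuration, and the checkpointer saves and restores state for that thread. A minimal sketch of the configuration (the invocations and the thread ID are hypothetical):

```python
# Each conversation is identified by a thread_id in the run configuration.
# The checkpointer persists state per thread, so reusing the same
# thread_id resumes the earlier conversation.
config = {"configurable": {"thread_id": "session-1"}}

# Hypothetical invocations against the compiled graph from above:
# app.invoke({"messages": [("user", "Hi, I'm Alice")]}, config=config)
# app.invoke({"messages": [("user", "What's my name?")]}, config=config)
```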

The MongoDB LangGraph Store allows you to store and retrieve memories in a MongoDB collection, providing long-term memory for your LangGraph agents. With long-term memory, your agents can recall past interactions and use that information to inform future decisions.

To install the package for this component:

pip install langgraph-store-mongodb

Atlas supports two embedding modes:

  • Manual embedding: Generate embedding vectors on the client side with an embedding model that you specify.

  • Automated embedding: MongoDB embeds text on the server side, so you don't need to generate embedding vectors manually. To learn more, see Automated Embedding.

    Important

    Automated embedding is available as a Preview feature only for MongoDB Community Edition v8.2 and later. The feature and the corresponding documentation might change at any time during the Preview period. To learn more, see Preview Features.
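In either mode, semantic retrieval ranks stored memories by vector similarity between the query embedding and the stored embeddings (commonly cosine similarity). A stdlib-only sketch of that computation, using toy three-dimensional vectors in place of real embeddings, which have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.2]
memories = {
    "likes-hiking": [0.1, 0.8, 0.3],
    "prefers-email": [0.9, 0.1, 0.0],
}

# Rank stored memories by similarity to the query vector
ranked = sorted(memories, key=lambda k: cosine_similarity(query, memories[k]), reverse=True)
print(ranked[0])  # likes-hiking
```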

from langgraph.store.mongodb import MongoDBStore, create_vector_index_config
from langchain_voyageai import VoyageAIEmbeddings

# Vector search index configuration with client-side embedding
index_config = create_vector_index_config(
    embed=VoyageAIEmbeddings(),
    dims=<dimensions>,
    fields=["<field-name>"],
    filters=["<filter-field-name>", ...]  # Optional
)

# Store memories in a MongoDB collection
with MongoDBStore.from_conn_string(
    conn_string=MONGODB_URI,
    db_name="<database-name>",
    collection_name="<collection-name>",
    index_config=index_config
) as store:
    store.put(
        namespace=("user", "memories"),
        key=f"memory_{hash(content)}",
        value={"content": content}
    )

To use automated embedding, pass an AutoEmbeddings instance to the embed parameter in the index configuration. This enables MongoDB to generate and manage embedding vectors automatically.

from langgraph.store.mongodb import MongoDBStore, create_vector_index_config
from langchain_mongodb import AutoEmbeddings

# Vector search index configuration with server-side automated embedding
index_config = create_vector_index_config(
    embed=AutoEmbeddings(model_name="voyage-4"),
    fields=["<field-name>"],
    filters=["<filter-field-name>", ...]  # Optional
)

# Store memories - the text is embedded server-side
with MongoDBStore.from_conn_string(
    conn_string=MONGODB_URI,
    db_name="<database-name>",
    collection_name="<collection-name>",
    index_config=index_config
) as store:
    store.put(
        namespace=("user", "memories"),
        key=f"memory_{hash(content)}",
        value={"content": content}
    )
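Note that Python's built-in hash() is randomized per process for strings (via PYTHONHASHSEED), so the memory_{hash(content)} keys in the snippets above can change across restarts, creating duplicate entries for identical content. If you need deterministic keys, one option is a content digest. A sketch; the helper below is illustrative, not part of the integration's API:

```python
import hashlib

def memory_key(content: str) -> str:
    # Derive a stable key from the memory text; unlike hash(),
    # SHA-256 produces the same digest in every Python process.
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()[:16]
    return f"memory_{digest}"

print(memory_key("User prefers concise answers"))
```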

Retrieving and deleting memories:

# Retrieve memories from a MongoDB collection
with MongoDBStore.from_conn_string(
    conn_string=MONGODB_URI,
    db_name="<database-name>",
    collection_name="<collection-name>",
    index_config=index_config
) as store:
    results = store.search(
        ("user", "memories"),
        query="<query-text>",
        limit=3
    )
    for result in results:
        print(result.value)

# To delete memories, use store.delete(namespace, key)
# To batch operations, use store.batch(ops)
The MongoDB LangGraph Store provides the following methods:

  • put(namespace, key, value, *, index): Stores a single item in the store with the specified namespace, key, and value.

  • search(namespace_prefix, /, *, ...): Searches for items within a given namespace_prefix. Supports basic key-value filtering and semantic search if a vector index is configured. The search method has the following modes:

      • Metadata filtering (no query): When called without a query argument, the method performs a standard MongoDB filtered query. You can specify a filter dictionary to match against fields within the stored value document.

        For example: store.search(("docs",), filter={"author": "Bob", "status": "published"})

      • Semantic search (with query): If the store was initialized with an index_config and a query string is provided, the method performs a semantic search. It embeds the query text and uses MongoDB Vector Search to find the most relevant items.

        For example: store.search(("docs",), query="information about AI assistants")

  • get(namespace, key, *, refresh_ttl): Retrieves a single item from the store. Optionally, you can refresh the item's TTL upon access.

  • delete(namespace, key): Deletes a single item from the store identified by its namespace and key.

  • list_namespaces(*, prefix, ...): Lists unique namespaces in the store. Allows filtering by a path prefix, suffix, and document depth.

  • batch(ops): Executes a sequence of operations (GetOp, PutOp, SearchOp, DeleteOp) in a single batch. Read operations are performed first, followed by a bulk application of deduplicated write operations. abatch(ops) is the async version of this method.

  • ensure_index_filters(filters): Prepares a list of filter fields for MongoDB Vector Search indexing.
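Namespaces are tuples, and search and list_namespaces match on tuple prefixes: the prefix ("user",) matches ("user", "memories") and ("user", "settings") alike. A stdlib-only sketch of the prefix semantics (the helper below is illustrative, not part of the store's API):

```python
def matches_prefix(namespace: tuple, prefix: tuple) -> bool:
    # A namespace matches when it starts with every element of the prefix.
    return namespace[:len(prefix)] == prefix

namespaces = [
    ("user", "memories"),
    ("user", "settings"),
    ("org", "policies"),
]

# Everything under the "user" hierarchy matches the one-element prefix
matched = [ns for ns in namespaces if matches_prefix(ns, ("user",))]
print(matched)  # [('user', 'memories'), ('user', 'settings')]
```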

You can seamlessly use LangChain retrievers as tools in your LangGraph workflow to retrieve relevant data from MongoDB.

The MongoDB LangChain integration natively supports full-text search, vector search, hybrid search, and parent-document retrieval. For a complete list of retrieval methods, see MongoDB LangChain Retrievers.
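Hybrid search merges the vector search and full-text search rankings into a single result list, typically with reciprocal rank fusion (RRF) style scoring. A stdlib-only sketch of the fusion step (the document IDs and ranked lists are made up for illustration):

```python
def reciprocal_rank_fusion(rankings, k=60):
    # Score each document by summing 1 / (k + rank) over every ranking
    # it appears in; documents ranked highly by either search rise to the top.
    scores = {}
    for ranked_docs in rankings:
        for rank, doc in enumerate(ranked_docs, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]    # ranked by vector similarity
fulltext_hits = ["doc_c", "doc_a", "doc_d"]  # ranked by keyword relevance

fused = reciprocal_rank_fusion([vector_hits, fulltext_hits])
print(fused)  # ['doc_a', 'doc_c', 'doc_b', 'doc_d']
```

Documents that appear near the top of both rankings (like doc_a here) outrank documents that score well in only one.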

  1. To create a basic retrieval tool with MongoDB Vector Search and LangChain:

    from langchain.tools.retriever import create_retriever_tool
    from langchain_mongodb.vectorstores import MongoDBAtlasVectorSearch
    from langchain_voyageai import VoyageAIEmbeddings

    # Instantiate the vector store
    vector_store = MongoDBAtlasVectorSearch.from_connection_string(
        connection_string="<connection-string>",  # MongoDB cluster URI
        namespace="<database-name>.<collection-name>",  # Database and collection name
        embedding=VoyageAIEmbeddings(),  # Embedding model to use
        index_name="vector_index",  # Name of the vector search index
        # Other optional parameters...
    )

    # Create a retrieval tool
    retriever = vector_store.as_retriever()
    retriever_tool = create_retriever_tool(
        retriever,
        "vector_search_retriever",  # Tool name
        "Retrieve relevant documents from the collection"  # Tool description
    )
  2. To add the tool as a node in LangGraph:

    1. Convert the tool into a node.

    2. Add the node to the graph.

    from langgraph.graph import StateGraph, MessagesState
    from langgraph.prebuilt import ToolNode

    # Define the graph with a state schema
    workflow = StateGraph(MessagesState)

    # Convert the retriever tool into a node
    retriever_node = ToolNode([retriever_tool])

    # Add the tool as a node in the graph
    workflow.add_node("vector_search_retriever", retriever_node)
    graph = workflow.compile()
