
Get Started with the Amazon Bedrock Knowledge Base Integration

On this page

  • Background
  • Prerequisites
  • Load Custom Data
  • Configure an Endpoint Service
  • Create the Atlas Vector Search Index
  • Create a Knowledge Base
  • Create an Agent
  • Next Steps

Note

Atlas Vector Search is currently available as a knowledge base only in AWS regions located in the United States.

You can use Atlas Vector Search as a knowledge base for Amazon Bedrock to build generative AI applications and implement retrieval-augmented generation (RAG). This tutorial demonstrates how to start using Atlas Vector Search with Amazon Bedrock. Specifically, you perform the following actions:

  1. Load custom data into an Amazon S3 bucket.

  2. Optionally, configure an endpoint service using AWS PrivateLink.

  3. Create an Atlas Vector Search index on your data.

  4. Create a knowledge base to store data on Atlas.

  5. Create an agent that uses Atlas Vector Search to implement RAG.

Background

Amazon Bedrock is a fully managed service for building generative AI applications. It allows you to leverage foundation models (FMs) from various AI companies through a single API.

You can use Atlas Vector Search as a knowledge base for Amazon Bedrock to store custom data in Atlas and create an agent to implement RAG and answer questions on your data. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with Atlas Vector Search.

Prerequisites

To complete this tutorial, you must have the following:

  • An Atlas cluster.

  • An AWS account with access to Amazon S3 and Amazon Bedrock.

  • An AWS Secrets Manager secret that contains the database credentials for your Atlas cluster.

Load Custom Data

If you don't already have an Amazon S3 bucket that contains text data, create a new bucket and load the following publicly accessible PDF about MongoDB best practices:

Step 1: Download the sample data.
  1. Navigate to the Best Practices Guide for MongoDB.

  2. Click either Read Whitepaper or Email me the PDF to access the PDF.

  3. Download and save the PDF locally.

Step 2: Upload the sample data to Amazon S3.
  1. Follow the steps to create an S3 Bucket. Ensure that you use a descriptive Bucket Name.

  2. Follow the steps to upload a file to your Bucket. Select the file that contains the PDF that you just downloaded.
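If you prefer to script this step, the following is a minimal sketch that uses boto3 to create the bucket and upload the PDF. The bucket name, region, and file path are placeholder values; substitute your own.

    import boto3

    # Placeholder values: substitute your own bucket name, region, and file path.
    BUCKET_NAME = "my-bedrock-kb-data"
    REGION = "us-east-1"
    PDF_PATH = "./mongodb-best-practices.pdf"

    s3 = boto3.client("s3", region_name=REGION)

    # Create the bucket. Buckets outside us-east-1 require a LocationConstraint.
    if REGION == "us-east-1":
        s3.create_bucket(Bucket=BUCKET_NAME)
    else:
        s3.create_bucket(
            Bucket=BUCKET_NAME,
            CreateBucketConfiguration={"LocationConstraint": REGION},
        )

    # Upload the PDF that you downloaded.
    s3.upload_file(PDF_PATH, BUCKET_NAME, "mongodb-best-practices.pdf")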

Configure an Endpoint Service

By default, Amazon Bedrock connects to your knowledge base over the public internet. To further secure your connection, Atlas Vector Search supports connecting to your knowledge base over a virtual network through an AWS PrivateLink endpoint service.

Optionally, complete the following steps to enable an endpoint service that connects to an AWS PrivateLink private endpoint for your Atlas cluster:

Step 1: Set up a private endpoint for your Atlas cluster.

Follow the steps to set up an AWS PrivateLink private endpoint for your Atlas cluster. Ensure that you use a descriptive VPC ID to identify your private endpoint.

For more information, see Learn About Private Endpoints in Atlas.

Step 2: Configure the endpoint service.

MongoDB and partners provide a Cloud Development Kit (CDK) that you can use to configure an endpoint service backed by a network load balancer that forwards traffic to your private endpoint.

Follow the steps specified in the CDK GitHub Repository to prepare and run the CDK script.
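After the CDK script completes, you can optionally confirm that the endpoint service exists with a quick boto3 check. This is a minimal sketch; the region and service name below are placeholders for the values that your CDK stack outputs.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Placeholder: replace with the endpoint service name from the CDK output.
    SERVICE_NAME = "com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE"

    response = ec2.describe_vpc_endpoint_services(ServiceNames=[SERVICE_NAME])
    for service in response["ServiceDetails"]:
        print(service["ServiceName"], service["ServiceType"])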

Create the Atlas Vector Search Index

In this section, you set up Atlas as a vector database, also called a vector store, by creating an Atlas Vector Search index on your collection.

To create an Atlas Vector Search index, you must have Project Data Access Admin or higher access to the Atlas project.

Step 1: Navigate to the Clusters page for your project.
  1. If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.

  2. If it's not already displayed, select your desired project from the Projects menu in the navigation bar.

  3. If it's not already displayed, click Clusters in the sidebar.

    The Clusters page displays.

Step 2: Open the Data Explorer.

Click the Browse Collections button for your cluster.

The Data Explorer displays.

Step 3: Create the bedrock_db.test collection.
  1. Click the + Create Database button.

  2. For the Database name, enter bedrock_db.

  3. For the Collection name, enter test.

  4. Click Create to create the database and its first collection.

Step 4: Navigate to the Atlas Search page.

You can go to the Atlas Search page from the sidebar, the Data Explorer, or your cluster details page.

From the sidebar:

  1. In the sidebar, click Atlas Search under the Services heading.

  2. From the Select data source dropdown, select your cluster and click Go to Atlas Search.

    The Atlas Search page displays.

From the Data Explorer:

  1. Click the Browse Collections button for your cluster.

  2. Expand the database and select the collection.

  3. Click the Search Indexes tab for the collection.

    The Atlas Search page displays.

From your cluster details page:

  1. Click the cluster's name.

  2. Click the Atlas Search tab.

    The Atlas Search page displays.

Step 5: Define the Atlas Vector Search index.
  1. Click the Create Search Index button.

  2. Under Atlas Vector Search, select JSON Editor and then click Next.

  3. In the Database and Collection section, find the bedrock_db database and select the test collection.

  4. In the Index Name field, enter vector_index.

  5. Replace the default definition with the following sample index definition and then click Next.

    This index definition specifies indexing the following fields in an index of the vectorSearch type:

    • The embedding field as the vector type. The embedding field contains the vector embeddings created using the embedding model that you specify when you configure the knowledge base. The index definition specifies 1024 vector dimensions and measures similarity using cosine.

    • The metadata and text_chunk fields as filter types for pre-filtering your data. You specify these fields when you configure the knowledge base.

    {
      "fields": [
        {
          "numDimensions": 1024,
          "path": "embedding",
          "similarity": "cosine",
          "type": "vector"
        },
        {
          "path": "metadata",
          "type": "filter"
        },
        {
          "path": "text_chunk",
          "type": "filter"
        }
      ]
    }
Step 6: Click Create Search Index.

A modal window displays to let you know that your index is building.

Step 7: Check the status of the index.

The index should take about one minute to build. While it builds, the Status column reads Initial Sync. When it finishes building, the Status column reads Active.
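If you prefer to create the index programmatically, the following sketch uses PyMongo (version 4.7 or later for the type parameter) with the same index definition as the JSON editor example above. The connection string is a placeholder.

    from pymongo import MongoClient
    from pymongo.operations import SearchIndexModel

    # Placeholder: replace with your Atlas connection string.
    client = MongoClient("mongodb+srv://<username>:<password>@<clusterName>.mongodb.net")
    collection = client["bedrock_db"]["test"]

    # Same definition as the JSON editor example above.
    index_model = SearchIndexModel(
        definition={
            "fields": [
                {
                    "numDimensions": 1024,
                    "path": "embedding",
                    "similarity": "cosine",
                    "type": "vector",
                },
                {"path": "metadata", "type": "filter"},
                {"path": "text_chunk", "type": "filter"},
            ]
        },
        name="vector_index",
        type="vectorSearch",
    )
    collection.create_search_index(model=index_model)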

Create a Knowledge Base

In this section, you create a knowledge base to load custom data into your vector store.

Step 1: Navigate to Amazon Bedrock.
  1. Log in to the AWS Console.

  2. In the upper-left corner, click the Services dropdown menu.

  3. Click Machine Learning, and then select Amazon Bedrock.

  4. On the Amazon Bedrock landing page, click Get started.

Step 2: Request access to the models.

Amazon Bedrock doesn't grant access to FMs automatically. If you haven't already, follow the steps to add model access for the Titan Embeddings G1 - Text and Anthropic Claude V2.1 models.

Step 3: Create a knowledge base.
  1. In the left navigation of the Amazon Bedrock console, click Knowledge bases.

  2. Click Create knowledge base.

  3. Specify mongodb-atlas-knowledge-base as the Knowledge base name.

  4. Click Next.

By default, Amazon Bedrock creates a new IAM role to access the knowledge base.

Step 4: Configure the data source.
  1. Specify a name for the data source used by the knowledge base.

  2. Enter the URI for the S3 bucket that contains your data source. Or, click Browse S3 and find the S3 bucket that contains your data source from the list.

  3. Click Next.

    Amazon Bedrock displays available embeddings models that you can use to convert your data source's text data into vector embeddings.

  4. Select Titan Embeddings G1 - Text.

Step 5: Configure MongoDB Atlas as the vector store.
  1. In the Vector database section, select Choose a vector store you have created.

  2. Select MongoDB Atlas and configure the following options:

    • For the Hostname, enter the URL for your Atlas cluster located in its connection string. The hostname uses the following format:

      <clusterName>.mongodb.net
    • For the Database name, enter bedrock_db.

    • For the Collection name, enter test.

    • For the Credentials secret ARN, enter the ARN for the secret that contains your Atlas cluster credentials. To learn more, see AWS Secrets Manager concepts. A sketch for creating this secret programmatically appears after these steps.

  3. In the Metadata field mapping section, configure the following options to determine the search index and field names that Atlas uses to embed and store your data source:

    • For the Vector search index name, enter vector_index.

    • For the Vector embedding field path, enter embedding.

    • For the Text field path, enter text_chunk.

    • For the Metadata field path, enter metadata.

  4. If you configured an endpoint service, enter your PrivateLink Service Name.

  5. Click Next.
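If you haven't created the credentials secret yet, the following boto3 sketch creates one. The secret name, region, and credentials are placeholders, and the username/password key names shown are an assumption; check the Amazon Bedrock documentation for the exact format it expects.

    import json
    import boto3

    secrets = boto3.client("secretsmanager", region_name="us-east-1")  # placeholder region

    # Placeholders: substitute your Atlas database user's credentials.
    # The "username"/"password" key names are an assumption.
    response = secrets.create_secret(
        Name="atlas-bedrock-credentials",
        SecretString=json.dumps({"username": "<atlasUser>", "password": "<atlasPassword>"}),
    )

    # Use this ARN for the Credentials secret ARN field above.
    print(response["ARN"])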

Step 6: Review and create the knowledge base.

After reviewing the details for your knowledge base, click Create knowledge base to finish creating it.

Step 7: Sync the data source.

After Amazon Bedrock creates the knowledge base, it prompts you to sync your data. In the Data source section, select your data source and click Sync to sync the data from the S3 bucket and load it into Atlas.

When the sync completes, you can view your vector embeddings in the Atlas UI by navigating to the bedrock_db.test collection in your cluster.
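As a quick check from code, the following PyMongo sketch (the connection string is a placeholder) prints part of one synced chunk and confirms that its embedding has 1024 dimensions:

    from pymongo import MongoClient

    # Placeholder: replace with your Atlas connection string.
    client = MongoClient("mongodb+srv://<username>:<password>@<clusterName>.mongodb.net")
    collection = client["bedrock_db"]["test"]

    # Fetch one synced document and inspect its fields.
    doc = collection.find_one()
    print(doc["text_chunk"][:200])
    print(len(doc["embedding"]))  # expect 1024 for Titan Embeddings G1 - Text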

Create an Agent

In this section, you create an agent that uses Atlas Vector Search to implement RAG and answer questions on your data. When you prompt this agent, it does the following:

  1. Connects to your knowledge base to access the custom data stored in Atlas.

  2. Uses Atlas Vector Search to retrieve relevant documents from your vector store based on the prompt.

  3. Leverages an AI chat model to generate a context-aware response based on these documents.
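Agents add planning and tool use on top of retrieval. If you want to exercise just the knowledge-base RAG flow without an agent, Amazon Bedrock also exposes a RetrieveAndGenerate API. The following boto3 sketch assumes your knowledge base ID and region as placeholders:

    import boto3

    runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # placeholder region

    # Placeholders: your knowledge base ID and the ARN of the model to generate with.
    response = runtime.retrieve_and_generate(
        input={"text": "What's the best practice to reduce network utilization with MongoDB?"},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "<knowledgeBaseId>",
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1",
            },
        },
    )
    print(response["output"]["text"])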

Complete the following steps to create and test the RAG agent:

Step 1: Create the agent.
  1. In the left navigation for Amazon Bedrock, click Agents.

  2. Click Create Agent.

  3. Specify mongodb-rag-agent as the Name and click Create.

Step 2: Configure the agent details.

By default, Amazon Bedrock creates a new IAM role to access the agent. In the Agent details section, specify the following:

  1. From the dropdown menus, select Anthropic and Claude V2.1 as the provider and AI model used to answer questions on your data.

    Note

    Amazon Bedrock doesn't grant access to FMs automatically. If you haven't already, follow the steps to add model access for the Anthropic Claude V2.1 model.

  2. Provide instructions for the agent so that it knows how to complete the task.

    For example, if you're using the sample data, paste the following instructions:

    You are a friendly AI chatbot that answers questions about working with MongoDB.
  3. Click Save.

Step 3: Connect the agent to the knowledge base.

To connect the agent to the knowledge base that you created:

  1. In the Knowledge Bases section, click Add.

  2. Select mongodb-atlas-knowledge-base from the dropdown.

  3. Describe the knowledge base to determine how the agent should interact with the data source.

    If you're using the sample data, paste the following instructions:

    This knowledge base describes best practices when working with MongoDB.
  4. Click Add, and then click Save.

Step 4: Prepare and test the agent.
  1. Click the Prepare button.

  2. Click Test. Amazon Bedrock displays a testing window to the right of your agent details if it's not already displayed.

  3. In the testing window, enter a prompt. The agent prompts the model, uses Atlas Vector Search to retrieve relevant documents, and then generates a response based on the documents.

    If you used the sample data, enter the following prompt. The generated response might vary.

    What's the best practice to reduce network utilization with MongoDB?
    The best practice to reduce network utilization with MongoDB is
    to issue updates only on fields that have changed rather than
    retrieving the entire documents in your application, updating
    fields, and then saving the document back to the database. [1]

    Tip

    Click the annotation in the agent's response to view the text chunk that Atlas Vector Search retrieved.
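You can also run the same test programmatically once the agent is prepared. This boto3 sketch streams the agent's response; the agent ID and alias ID are placeholders that you can copy from the agent's page in the Bedrock console:

    import boto3
    import uuid

    runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # placeholder region

    # Placeholders: copy these IDs from the agent's page in the Bedrock console.
    response = runtime.invoke_agent(
        agentId="<agentId>",
        agentAliasId="<agentAliasId>",
        sessionId=str(uuid.uuid4()),
        inputText="What's the best practice to reduce network utilization with MongoDB?",
    )

    # The response streams back as chunks of bytes.
    completion = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            completion += chunk["bytes"].decode("utf-8")
    print(completion)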


Next Steps

MongoDB and partners also provide the following developer resources:

