Build a Local RAG Implementation with Atlas Vector Search
This tutorial demonstrates how to implement retrieval-augmented generation (RAG) locally, without the need for API keys or credits. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with Atlas Vector Search.
Specifically, you perform the following actions:
Create a local Atlas deployment or deploy a cluster on the cloud.
Set up the environment.
Use a local embedding model to generate vector embeddings.
Create an Atlas Vector Search index on your data.
Use a local LLM to answer questions on your data.
➤ Use the Select your language drop-down menu to set the language of the examples on this page. This tutorial provides examples for Go, Java, Node.js, and Python.
Background
To complete this tutorial, you can either create a local Atlas deployment by using the Atlas CLI or deploy a cluster on the cloud. The Atlas CLI is the command-line interface for MongoDB Atlas, and you can use the Atlas CLI to interact with Atlas from the terminal for various tasks, including creating local Atlas deployments. To learn more, see Manage Local and Cloud Deployments from the Atlas CLI.
Note
Local Atlas deployments are intended for testing only. For production environments, deploy a cluster.
You also use the following open-source models in this tutorial:
Nomic Embed Text embedding model
Mistral 7B generative model
There are several ways to download and deploy LLMs locally. In this tutorial, you download Ollama and pull the open source models listed above to perform RAG tasks.
This tutorial also uses the Go language port of LangChain, a popular open-source LLM framework, to connect to these models and integrate them with Atlas Vector Search. If you prefer different models or a different framework, you can adapt this tutorial by replacing the Ollama model names or LangChain library components with their equivalents for your preferred setup.
There are several ways to download and deploy LLMs locally. In this tutorial, you download Ollama and pull the following open source models to perform RAG tasks:
Nomic Embed Text embedding model
Mistral 7B generative model
This tutorial also uses LangChain4j, a popular open-source LLM framework for Java, to connect to these models and integrate them with Atlas Vector Search. If you prefer different models or a different framework, you can adapt this tutorial by replacing the Ollama model names or LangChain4j library components with their equivalents for your preferred setup.
You also use the following open-source models in this tutorial:
mxbai-embed-large-v1 embedding model
Mistral 7B generative model
There are several ways to download and deploy LLMs locally. In this tutorial, you download the Mistral 7B model by using GPT4All, an open-source ecosystem for local LLM development.
When working through this tutorial, you use an interactive Python notebook. This environment allows you to create and execute individual code blocks without running the entire file each time.
You also use the following open-source models in this tutorial:
mxbai-embed-large-v1 embedding model
Mistral 7B generative model
There are several ways to download and deploy LLMs locally. In this tutorial, you download the Mistral 7B model by using GPT4All, an open-source ecosystem for local LLM development.
Prerequisites
To complete this tutorial, you must have the following:
The Atlas CLI installed and running v1.14.3 or later.
MongoDB Command Line Database Tools installed.
A terminal and code editor to run your Go project.
Go installed.
Ollama installed.
To complete this tutorial, you must have the following:
The Atlas CLI installed and running v1.14.3 or later.
MongoDB Command Line Database Tools installed.
Java Development Kit (JDK) version 8 or later.
An environment to set up and run a Java application. We recommend that you use an integrated development environment (IDE) such as IntelliJ IDEA or Eclipse IDE to configure Maven or Gradle to build and run your project.
Ollama installed.
To complete this tutorial, you must have the following:
The Atlas CLI installed and running v1.14.3 or later.
MongoDB Command Line Database Tools installed.
A Hugging Face Access Token with read access.
Git Large File Storage installed.
A terminal and code editor to run your Node.js project.
npm and Node.js installed.
To complete this tutorial, you must have the following:
The Atlas CLI installed and running v1.14.3 or later.
MongoDB Command Line Database Tools installed.
An interactive Python notebook that you can run locally. You can run interactive Python notebooks in VS Code. Ensure that your environment runs Python v3.10 or later.
Note
If you use a hosted service such as Colab, ensure that you have enough RAM to run this tutorial. Otherwise, you might experience performance issues.
Create a Local Deployment or Atlas Cluster
This tutorial requires a local or cloud Atlas deployment loaded with the sample AirBnB listings dataset to use as a vector database.
If you have an existing Atlas cluster running MongoDB version 6.0.11,
7.0.2, or later with the sample_airbnb.listingsAndReviews
sample data
loaded, you can skip this step.
You can create a local Atlas deployment using the Atlas CLI or deploy a cluster on the cloud.
You can create a local deployment using the Atlas CLI.
Connect from the Atlas CLI.
In your terminal, run atlas auth login
to authenticate with your
Atlas login credentials. To learn more, see
Connect from the Atlas CLI.
Note
If you don't have an existing Atlas account, run atlas setup
or create a new account.
Create a local deployment by using the Atlas CLI.
Run atlas deployments setup
and follow the prompts to create a
local deployment.
For detailed instructions, see Create a Local Atlas Deployment.
Load the sample data into your deployment.
Run the following command in your terminal to download the sample data:
curl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive

Run the following command to load the data into your deployment, replacing <port-number> with the port where you're hosting the deployment:

mongorestore --archive=sampledata.archive --port=<port-number>

Note

You must install MongoDB Command Line Database Tools to access the mongorestore command.
You can create and deploy a new cluster using the Atlas CLI or Atlas UI. Ensure that you preload the new cluster with the sample data.
To learn how to load the sample data provided by Atlas into your cluster, see Load Sample Data.
For detailed instructions, see Create a Cluster.
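For example, if you also want to use the Atlas CLI for the cloud deployment, the workflow resembles the following sketch. The cluster name myRagCluster is an arbitrary placeholder, and flag names can vary between Atlas CLI versions, so run atlas clusters create --help to confirm the options before running these commands:

atlas clusters create myRagCluster --provider AWS --region US_EAST_1 --tier M0
atlas clusters sampleData load myRagCluster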
Set Up the Environment
In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:
Create a .env
file.
In your project, create a .env
file to store your connection string.
ATLAS_CONNECTION_STRING = "<connection-string>"
Replace the <connection-string>
placeholder value with your Atlas
connection string.
If you're using a local Atlas deployment,
your connection string follows this format, replacing
<port-number>
with the port for your local deployment.
ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true"
If you're using an Atlas cluster, your connection string
follows this format, replacing <connection-string>
with your Atlas cluster's SRV connection string:
ATLAS_CONNECTION_STRING = "<connection-string>"
Note
Your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
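Optionally, you can confirm that the connection string in your .env file works before continuing. The following is a minimal sketch that assumes the godotenv package and the MongoDB Go Driver used later in this tutorial are already added to your Go module; the file name verify-connection.go is an arbitrary choice:

verify-connection.go

package main

import (
    "context"
    "log"
    "os"

    "github.com/joho/godotenv"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    ctx := context.Background()

    // Load the ATLAS_CONNECTION_STRING value from the .env file
    if err := godotenv.Load(); err != nil {
        log.Println("no .env file found")
    }

    uri := os.Getenv("ATLAS_CONNECTION_STRING")
    if uri == "" {
        log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.")
    }

    // Connect to the deployment and ping it to verify the connection string
    client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
    if err != nil {
        log.Fatalf("failed to connect to the server: %v", err)
    }
    defer func() { _ = client.Disconnect(ctx) }()

    if err := client.Ping(ctx, nil); err != nil {
        log.Fatalf("failed to ping the deployment: %v", err)
    }
    log.Println("Successfully connected to the deployment.")
}

Run it with go run verify-connection.go.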
In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:
Create your Java project and install dependencies.
From your IDE, create a Java project named local-rag-mongodb using Maven or Gradle. Add the following dependencies, depending on your package manager:

If you are using Maven, add the following dependencies to the dependencies section of your project's pom.xml file:

pom.xml
<dependencies>
    <!-- MongoDB Java Sync Driver v5.2.0 or later -->
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongodb-driver-sync</artifactId>
        <version>[5.2.0,)</version>
    </dependency>
    <!-- Java library for working with Ollama -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-ollama</artifactId>
        <version>0.35.0</version>
    </dependency>
</dependencies>

If you are using Gradle, add the following to the dependencies block in your project's build.gradle file:

build.gradle
dependencies {
    // MongoDB Java Sync Driver v5.2.0 or later
    implementation 'org.mongodb:mongodb-driver-sync:[5.2.0,)'
    // Java library for working with Ollama
    implementation 'dev.langchain4j:langchain4j-ollama:0.35.0'
}

Run your package manager to install the dependencies to your project.
Set your environment variable.
Note
This example sets the variable in the IDE. Production applications might manage environment variables through a deployment configuration, CI/CD pipeline, or secrets manager, but you can adapt the provided code to fit your use case.
In your IDE, create a new configuration template and add the following variables to your project:
If you are using IntelliJ IDEA, create a new Application run configuration template, then add your variables as semicolon-separated values in the Environment variables field (for example,
FOO=123;BAR=456
). Apply the changes and click OK. To learn more, see the Create a run/debug configuration from a template section of the IntelliJ IDEA documentation.
If you are using Eclipse, create a new Java Application launch configuration, then add each variable as a new key-value pair in the Environment tab. Apply the changes and click OK.
To learn more, see the Creating a Java application launch configuration section of the Eclipse IDE documentation.
Replace the <port-number>
with the port for your local deployment.
Your connection string should use the following format:
ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true"
Replace the <connection-string>
placeholder value with the SRV
connection string for
your Atlas cluster.
Your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:
Create a .env
file.
In your project, create a .env
file to store your connection string.
ATLAS_CONNECTION_STRING = "<connection-string>"
Replace the <connection-string>
placeholder value with your Atlas
connection string.
If you're using a local Atlas deployment,
your connection string follows this format, replacing
<port-number>
with the port for your local deployment.
ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true";
If you're using an Atlas cluster, your connection string
follows this format, replacing "<connection-string>";
with your Atlas cluster's SRV connection string:
ATLAS_CONNECTION_STRING = "<connection-string>";
Note
Your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
Note
Minimum Node.js Version Requirements
Node.js v20.x introduced the --env-file
option. If you are using an
older version of Node.js, add the dotenv
package to your project, or
use a different method to manage your environment variables.
In this section, you set up the environment for this tutorial.
Define your Atlas connection string.
If you're using a local Atlas deployment,
run the following code in your notebook, replacing <port-number>
with the port for your local deployment.
ATLAS_CONNECTION_STRING = ("mongodb://localhost:<port-number>/?directConnection=true")
If you're using an Atlas cluster,
run the following code in your notebook, replacing <connection-string>
with your Atlas cluster's SRV connection string:
ATLAS_CONNECTION_STRING = ("<connection-string>")
Note
Your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
Generate Embeddings with a Local Model
In this section, you load an embedding model locally and generate vector embeddings by using data from the sample_airbnb database, which contains a single collection called listingsAndReviews.
Download the local embedding model.
This example uses the nomic-embed-text model from Ollama.
Run the following command to pull the embedding model:
ollama pull nomic-embed-text
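If you want to confirm that the model downloaded and that the local Ollama server is available, you can list your installed models. The output format varies by Ollama version:

ollama list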
Generate embeddings.
Create a common directory to store code that you'll reuse in multiple steps.

mkdir common && cd common

Create a file called get-embeddings.go, and paste the following code into it:

get-embeddings.go

package common

import (
    "context"
    "log"

    "github.com/tmc/langchaingo/llms/ollama"
)

func GetEmbeddings(documents []string) [][]float32 {
    llm, err := ollama.New(ollama.WithModel("nomic-embed-text"))
    if err != nil {
        log.Fatalf("failed to connect to ollama: %v", err)
    }

    ctx := context.Background()

    embs, err := llm.CreateEmbedding(ctx, documents)
    if err != nil {
        log.Fatalf("failed to create ollama embedding: %v", err)
    }

    return embs
}

To simplify marshalling and unmarshalling documents in this collection to and from BSON, create a file called
models.go
and paste the following code into it:models.gopackage common import ( "time" "go.mongodb.org/mongo-driver/bson/primitive" ) type Image struct { ThumbnailURL string `bson:"thumbnail_url"` MediumURL string `bson:"medium_url"` PictureURL string `bson:"picture_url"` XLPictureURL string `bson:"xl_picture_url"` } type Host struct { ID string `bson:"host_id"` URL string `bson:"host_url"` Name string `bson:"host_name"` Location string `bson:"host_location"` About string `bson:"host_about"` ThumbnailURL string `bson:"host_thumbnail_url"` PictureURL string `bson:"host_picture_url"` Neighborhood string `bson:"host_neighborhood"` IsSuperhost bool `bson:"host_is_superhost"` HasProfilePic bool `bson:"host_has_profile_pic"` IdentityVerified bool `bson:"host_identity_verified"` ListingsCount int32 `bson:"host_listings_count"` TotalListingsCount int32 `bson:"host_total_listings_count"` Verifications []string `bson:"host_verifications"` } type Location struct { Type string `bson:"type"` Coordinates []float64 `bson:"coordinates"` IsLocationExact bool `bson:"is_location_exact"` } type Address struct { Street string `bson:"street"` Suburb string `bson:"suburb"` GovernmentArea string `bson:"government_area"` Market string `bson:"market"` Country string `bson:"Country"` CountryCode string `bson:"country_code"` Location Location `bson:"location"` } type Availability struct { Thirty int32 `bson:"availability_30"` Sixty int32 `bson:"availability_60"` Ninety int32 `bson:"availability_90"` ThreeSixtyFive int32 `bson:"availability_365"` } type ReviewScores struct { Accuracy int32 `bson:"review_scores_accuracy"` Cleanliness int32 `bson:"review_scores_cleanliness"` CheckIn int32 `bson:"review_scores_checkin"` Communication int32 `bson:"review_scores_communication"` Location int32 `bson:"review_scores_location"` Value int32 `bson:"review_scores_value"` Rating int32 `bson:"review_scores_rating"` } type Review struct { ID string `bson:"_id"` Date time.Time `bson:"date,omitempty"` ListingId string `bson:"listing_id"` ReviewerId string `bson:"reviewer_id"` ReviewerName string `bson:"reviewer_name"` Comments string `bson:"comments"` } type Listing struct { ID string `bson:"_id"` ListingURL string `bson:"listing_url"` Name string `bson:"name"` Summary string `bson:"summary"` Space string `bson:"space"` Description string `bson:"description"` NeighborhoodOverview string `bson:"neighborhood_overview"` Notes string `bson:"notes"` Transit string `bson:"transit"` Access string `bson:"access"` Interaction string `bson:"interaction"` HouseRules string `bson:"house_rules"` PropertyType string `bson:"property_type"` RoomType string `bson:"room_type"` BedType string `bson:"bed_type"` MinimumNights string `bson:"minimum_nights"` MaximumNights string `bson:"maximum_nights"` CancellationPolicy string `bson:"cancellation_policy"` LastScraped time.Time `bson:"last_scraped,omitempty"` CalendarLastScraped time.Time `bson:"calendar_last_scraped,omitempty"` FirstReview time.Time `bson:"first_review,omitempty"` LastReview time.Time `bson:"last_review,omitempty"` Accommodates int32 `bson:"accommodates"` Bedrooms int32 `bson:"bedrooms"` Beds int32 `bson:"beds"` NumberOfReviews int32 `bson:"number_of_reviews"` Bathrooms primitive.Decimal128 `bson:"bathrooms"` Amenities []string `bson:"amenities"` Price primitive.Decimal128 `bson:"price"` WeeklyPrice primitive.Decimal128 `bson:"weekly_price"` MonthlyPrice primitive.Decimal128 `bson:"monthly_price"` CleaningFee primitive.Decimal128 `bson:"cleaning_fee"` ExtraPeople primitive.Decimal128 
`bson:"extra_people"` GuestsIncluded primitive.Decimal128 `bson:"guests_included"` Image Image `bson:"images"` Host Host `bson:"host"` Address Address `bson:"address"` Availability Availability `bson:"availability"` ReviewScores ReviewScores `bson:"review_scores"` Reviews []Review `bson:"reviews"` Embeddings []float32 `bson:"embeddings,omitempty"` } Return to the root directory.
cd ../

Create another file called
generate-embeddings.go
and paste the following code into it:generate-embeddings.go1 package main 2 3 import ( 4 "context" 5 "local-rag-mongodb/common" // Module that contains the models and GetEmbeddings function 6 "log" 7 "os" 8 9 "github.com/joho/godotenv" 10 "go.mongodb.org/mongo-driver/bson" 11 "go.mongodb.org/mongo-driver/mongo" 12 "go.mongodb.org/mongo-driver/mongo/options" 13 ) 14 15 func main() { 16 ctx := context.Background() 17 18 if err := godotenv.Load(); err != nil { 19 log.Println("no .env file found") 20 } 21 22 // Connect to your Atlas cluster 23 uri := os.Getenv("ATLAS_CONNECTION_STRING") 24 if uri == "" { 25 log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.") 26 } 27 clientOptions := options.Client().ApplyURI(uri) 28 client, err := mongo.Connect(ctx, clientOptions) 29 if err != nil { 30 log.Fatalf("failed to connect to the server: %v", err) 31 } 32 defer func() { _ = client.Disconnect(ctx) }() 33 34 // Set the namespace 35 coll := client.Database("sample_airbnb").Collection("listingsAndReviews") 36 37 filter := bson.D{ 38 {"$and", 39 bson.A{ 40 bson.D{ 41 {"$and", 42 bson.A{ 43 bson.D{{"summary", bson.D{{"$exists", true}}}}, 44 bson.D{{"summary", bson.D{{"$ne", ""}}}}, 45 }, 46 }}, 47 bson.D{{"embeddings", bson.D{{"$exists", false}}}}, 48 }}, 49 } 50 51 findOptions := options.Find().SetLimit(250) 52 53 cursor, err := coll.Find(ctx, filter, findOptions) 54 if err != nil { 55 log.Fatalf("failed to retrieve data from the server: %v", err) 56 } 57 58 var listings []common.Listing 59 if err = cursor.All(ctx, &listings); err != nil { 60 log.Fatalf("failed to unmarshal retrieved docs to model objects: %v", err) 61 } 62 63 var summaries []string 64 for _, listing := range listings { 65 summaries = append(summaries, listing.Summary) 66 } 67 68 log.Println("Generating embeddings.") 69 embeddings := common.GetEmbeddings(summaries) 70 71 updateDocuments := make([]mongo.WriteModel, len(listings)) 72 for i := range updateDocuments { 73 updateDocuments[i] = mongo.NewUpdateOneModel(). 74 SetFilter(bson.D{{"_id", listings[i].ID}}). 75 SetUpdate(bson.D{{"$set", bson.D{{"embeddings", embeddings[i]}}}}) 76 } 77 78 bulkWriteOptions := options.BulkWrite().SetOrdered(false) 79 80 result, err := coll.BulkWrite(ctx, updateDocuments, bulkWriteOptions) 81 if err != nil { 82 log.Fatalf("failed to update documents: %v", err) 83 } 84 85 log.Printf("%d documents updated successfully.", result.MatchedCount) 86 } In this example, we set a limit of 250 documents when generating embeddings. The process to generate embeddings for the more than 5000 documents in the collection is slow. If you want to change the number of documents you're generating embeddings for:
Change the number of documents: Adjust the .SetLimit(250) value in the Find() options on line 51.
Generate embeddings for all documents: Omit the findOptions argument from the Find() call on line 53.
Run the following command to execute the code:
go run generate-embeddings.go
2024/10/10 15:49:23 Generating embeddings.
2024/10/10 15:49:28 250 documents updated successfully.
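Optionally, you can verify that the embeddings field was written and that each vector has the 768 dimensions the index in the next section expects. The following is a minimal sketch that reuses the common.Listing model from this tutorial; the file name check-embeddings.go is an arbitrary choice:

check-embeddings.go

package main

import (
    "context"
    "local-rag-mongodb/common" // Module that contains the Listing model
    "log"
    "os"

    "github.com/joho/godotenv"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    ctx := context.Background()

    if err := godotenv.Load(); err != nil {
        log.Println("no .env file found")
    }

    uri := os.Getenv("ATLAS_CONNECTION_STRING")
    if uri == "" {
        log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.")
    }
    client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
    if err != nil {
        log.Fatalf("failed to connect to the server: %v", err)
    }
    defer func() { _ = client.Disconnect(ctx) }()

    coll := client.Database("sample_airbnb").Collection("listingsAndReviews")

    // Find one document that has an embeddings field and inspect its length
    var listing common.Listing
    filter := bson.D{{"embeddings", bson.D{{"$exists", true}}}}
    if err := coll.FindOne(ctx, filter).Decode(&listing); err != nil {
        log.Fatalf("failed to find a document with embeddings: %v", err)
    }

    log.Printf("Document %s has an embedding with %d dimensions.", listing.ID, len(listing.Embeddings))
}

Run it with go run check-embeddings.go; the dimension count should match the numDimensions value you use when you create the vector index.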
Download the local embedding model.
Run the following command to pull the nomic-embed-text model from Ollama:
ollama pull nomic-embed-text
Define your model and method to generate vector embeddings.
Create a file called OllamaModels.java
and paste the
following code.
This code defines the local Ollama embedding and chat models that you'll use in your project. We'll work with the chat model in a later step. You can adapt or create additional models as needed for your preferred setup.
This code also defines two methods to generate embeddings for a given input using the embedding model that you downloaded previously:
Multiple Inputs: The getEmbeddings method accepts a list of text inputs (List<String>), allowing you to create multiple embeddings in a single API call. The method converts the API-provided arrays of floats to BSON arrays of doubles for storing in your Atlas cluster.
Single Input: The getEmbedding method accepts a single String, which represents a query you want to make against your vector data. The method converts the API-provided array of floats to a BSON array of doubles to use when querying your collection.
import dev.langchain4j.data.embedding.Embedding; import dev.langchain4j.data.segment.TextSegment; import dev.langchain4j.model.ollama.OllamaChatModel; import dev.langchain4j.model.ollama.OllamaEmbeddingModel; import dev.langchain4j.model.output.Response; import org.bson.BsonArray; import org.bson.BsonDouble; import java.util.List; import static java.time.Duration.ofSeconds; public class OllamaModels { private static final String host = "http://localhost:11434"; private static OllamaEmbeddingModel embeddingModel; private static OllamaChatModel chatModel; /** * Returns the Ollama embedding model used by the getEmbeddings() and getEmbedding() methods * to generate vector embeddings. */ public static OllamaEmbeddingModel getEmbeddingModel() { if (embeddingModel == null) { embeddingModel = OllamaEmbeddingModel.builder() .timeout(ofSeconds(10)) .modelName("nomic-embed-text") .baseUrl(host) .build(); } return embeddingModel; } /** * Returns the Ollama chat model interface used by the createPrompt() method * to process queries and generate responses. */ public static OllamaChatModel getChatModel() { if (chatModel == null) { chatModel = OllamaChatModel.builder() .timeout(ofSeconds(25)) .modelName("mistral") .baseUrl(host) .build(); } return chatModel; } /** * Takes an array of strings and returns a collection of BSON array embeddings * to store in the database. */ public static List<BsonArray> getEmbeddings(List<String> texts) { List<TextSegment> textSegments = texts.stream() .map(TextSegment::from) .toList(); Response<List<Embedding>> response = getEmbeddingModel().embedAll(textSegments); return response.content().stream() .map(e -> new BsonArray( e.vectorAsList().stream() .map(BsonDouble::new) .toList())) .toList(); } /** * Takes a single string and returns a BSON array embedding to * use in a vector query. */ public static BsonArray getEmbedding(String text) { Response<Embedding> response = getEmbeddingModel().embed(text); return new BsonArray( response.content().vectorAsList().stream() .map(BsonDouble::new) .toList()); } }
Define code to generate embeddings from the sample data.
Create a file named EmbeddingGenerator.java
and paste the following code.
This code uses the getEmbeddings
method and the MongoDB
Java Sync Driver to do the following:
Connect to your local Atlas deployment or Atlas cluster.
Get a subset of documents from the sample_airbnb.listingsAndReviews collection that have a non-empty summary field.
Note
For demonstration purposes, we set a limit of 250 documents to reduce the processing time. You can adjust or remove this limit as needed to better suit your use case.
Generate an embedding from each document's summary field using the getEmbeddings method that you defined previously.
Update each document with a new embeddings field that contains the corresponding embedding value.
import com.mongodb.MongoException; import com.mongodb.bulk.BulkWriteResult; import com.mongodb.client.MongoClient; import com.mongodb.client.MongoClients; import com.mongodb.client.MongoCollection; import com.mongodb.client.MongoCursor; import com.mongodb.client.MongoDatabase; import com.mongodb.client.model.BulkWriteOptions; import com.mongodb.client.model.Filters; import com.mongodb.client.model.Projections; import com.mongodb.client.model.UpdateOneModel; import com.mongodb.client.model.Updates; import com.mongodb.client.model.WriteModel; import org.bson.BsonArray; import org.bson.Document; import org.bson.conversions.Bson; import java.util.ArrayList; import java.util.List; public class EmbeddingGenerator { public static void main(String[] args) { String uri = System.getenv("ATLAS_CONNECTION_STRING"); if (uri == null || uri.isEmpty()) { throw new RuntimeException("ATLAS_CONNECTION_STRING env variable is not set or is empty."); } // establish connection and set namespace try (MongoClient mongoClient = MongoClients.create(uri)) { MongoDatabase database = mongoClient.getDatabase("sample_airbnb"); MongoCollection<Document> collection = database.getCollection("listingsAndReviews"); // define parameters for the find() operation // NOTE: this example uses a limit to reduce processing time Bson projectionFields = Projections.fields( Projections.include("_id", "summary")); Bson filterSummary = Filters.ne("summary", ""); int limit = 250; try (MongoCursor<Document> cursor = collection .find(filterSummary) .projection(projectionFields) .limit(limit) .iterator()) { List<String> summaries = new ArrayList<>(); List<String> documentIds = new ArrayList<>(); while (cursor.hasNext()) { Document document = cursor.next(); String summary = document.getString("summary"); String id = document.get("_id").toString(); summaries.add(summary); documentIds.add(id); } // generate embeddings for the summary in each document // and add to the document to the 'embeddings' array field System.out.println("Generating embeddings for " + summaries.size() + " documents."); System.out.println("This operation may take up to several minutes."); List<BsonArray> embeddings = OllamaModels.getEmbeddings(summaries); List<WriteModel<Document>> updateDocuments = new ArrayList<>(); for (int j = 0; j < summaries.size(); j++) { UpdateOneModel<Document> updateDoc = new UpdateOneModel<>( Filters.eq("_id", documentIds.get(j)), Updates.set("embeddings", embeddings.get(j))); updateDocuments.add(updateDoc); } // bulk write the updated documents to the 'listingsAndReviews' collection int result = performBulkWrite(updateDocuments, collection); System.out.println("Added embeddings successfully to " + result + " documents."); } } catch (MongoException me) { throw new RuntimeException("Failed to connect to MongoDB", me); } catch (Exception e) { throw new RuntimeException("Operation failed: ", e); } } /** * Performs a bulk write operation on the specified collection. */ private static int performBulkWrite(List<WriteModel<Document>> updateDocuments, MongoCollection<Document> collection) { if (updateDocuments.isEmpty()) { return 0; } BulkWriteResult result; try { BulkWriteOptions options = new BulkWriteOptions().ordered(false); result = collection.bulkWrite(updateDocuments, options); return result.getModifiedCount(); } catch (MongoException me) { throw new RuntimeException("Failed to insert documents", me); } } }
Download the local embedding model.
This example uses the mixedbread-ai/mxbai-embed-large-v1 model from the Hugging Face model hub. The simplest method to download the model files is to clone the repository using Git with Git Large File Storage. Hugging Face requires a user access token or Git over SSH to authenticate your request to clone the repository.
git clone https://<your-hugging-face-username>:<your-hugging-face-user-access-token>@huggingface.co/mixedbread-ai/mxbai-embed-large-v1
git clone git@hf.co:mixedbread-ai/mxbai-embed-large-v1
Tip
Git Large File Storage
The Hugging Face model files are large, and require Git Large File Storage (git-lfs) to clone the repositories. If you see errors related to large file storage, ensure you have installed git-lfs.
Get the local path to the model files.
Get the path to the local model files on your machine. This is the parent directory that contains the git repository you just cloned. If you cloned the model repository inside the project directory you created for this tutorial, the parent directory path should resemble:
/Users/<username>/local-rag-mongodb
Check the model directory and make sure it contains an onnx
directory
that has a model_quantized.onnx
file:
cd mxbai-embed-large-v1/onnx
ls
model.onnx model_fp16.onnx model_quantized.onnx
Generate embeddings.
Navigate back to the
local-rag-mongodb
parent directory.

Create a file called
get-embeddings.js
, and paste the following code into it:get-embeddings.jsimport { env, pipeline } from '@xenova/transformers'; // Function to generate embeddings for given data export async function getEmbeddings(data) { // Replace this path with the parent directory that contains the model files env.localModelPath = '/Users/<username>/local-rag-mongodb/'; env.allowRemoteModels = false; const task = 'feature-extraction'; const model = 'mxbai-embed-large-v1'; const embedder = await pipeline( task, model); const results = await embedder(data, { pooling: 'mean', normalize: true }); return Array.from(results.data); } Replace the
'/Users/<username>/local-rag-mongodb/'
with the local path from the prior step.Create another file called
generate-embeddings.js
and paste the following code into it:generate-embeddings.js1 import { MongoClient } from 'mongodb'; 2 import { getEmbeddings } from './get-embeddings.js'; 3 4 async function run() { 5 const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING); 6 7 try { 8 // Connect to your local MongoDB deployment 9 await client.connect(); 10 const db = client.db("sample_airbnb"); 11 const collection = db.collection("listingsAndReviews"); 12 13 const filter = { '$and': [ 14 { 'summary': { '$exists': true, '$ne': null } }, 15 { 'embeddings': { '$exists': false } } 16 ]}; 17 18 // This is a long-running operation for all docs in the collection, 19 // so we limit the docs for this example 20 const cursor = collection.find(filter).limit(50); 21 22 // To verify that you have the local embedding model configured properly, 23 // try generating an embedding for one document 24 const firstDoc = await cursor.next(); 25 if (!firstDoc) { 26 console.log('No document found.'); 27 return; 28 } 29 30 const firstDocEmbeddings = await getEmbeddings(firstDoc.summary); 31 console.log(firstDocEmbeddings); 32 33 // After confirming you are successfully generating embeddings, 34 // uncomment the following code to generate embeddings for all docs. 35 /* cursor.rewind(); // Reset the cursor to process documents again 36 * console.log("Generating embeddings for documents. Standby."); 37 * let updatedDocCount = 0; 38 * 39 * for await (const doc of cursor) { 40 * const text = doc.summary; 41 * const embeddings = await getEmbeddings(text); 42 * await collection.updateOne({ "_id": doc._id }, 43 * { 44 * "$set": { 45 * "embeddings": embeddings 46 * } 47 * } 48 * ); 49 * updatedDocCount += 1; 50 * } 51 * console.log("Count of documents updated: " + updatedDocCount); 52 */ 53 } catch (err) { 54 console.log(err.stack); 55 } 56 finally { 57 await client.close(); 58 } 59 } 60 run().catch(console.dir); This code includes a few lines to test that you have correctly downloaded the model and are using the correct path. Run the following command to execute the code:
node --env-file=.env generate-embeddings.js Tensor { dims: [ 1, 1024 ], type: 'float32', data: Float32Array(1024) [ -0.01897735893726349, -0.001120976754464209, -0.021224822849035263, -0.023649735376238823, -0.03350808471441269, -0.0014186901971697807, -0.009617107920348644, 0.03344292938709259, 0.05424851179122925, -0.025904450565576553, 0.029770011082291603, -0.0006215018220245838, 0.011056603863835335, -0.018984895199537277, 0.03985185548663139, -0.015273082070052624, -0.03193040192127228, 0.018376577645540237, -0.02236943319439888, 0.01433168537914753, 0.02085157483816147, -0.005689046811312437, -0.05541415512561798, -0.055907104164361954, -0.019112611189484596, 0.02196515165269375, 0.027313007041811943, -0.008618313819169998, 0.045496534556150436, 0.06271681934595108, -0.0028660669922828674, -0.02433634363114834, 0.02016191929578781, -0.013882477767765522, -0.025465600192546844, 0.0000950733374338597, 0.018200192600488663, -0.010413561016321182, -0.002004098379984498, -0.058351870626211166, 0.01749623566865921, -0.013926318846642971, -0.00278360559605062, -0.010333008132874966, 0.004406726453453302, 0.04118744656443596, 0.02210155501961708, -0.016340743750333786, 0.004163357429206371, -0.018561601638793945, 0.0021984230261296034, -0.012378614395856857, 0.026662321761250496, -0.006476820446550846, 0.001278138137422502, -0.010084952227771282, -0.055993322283029556, -0.015850437805056572, 0.015145729295909405, 0.07512971013784409, -0.004111358895897865, -0.028162647038698196, 0.023396577686071396, -0.01159974467009306, 0.021751703694462776, 0.006198467221111059, 0.014084039255976677, -0.0003913900291081518, 0.006310020107775927, -0.04500332102179527, 0.017774192616343498, -0.018170733004808426, 0.026185045018792152, -0.04488714039325714, -0.048510149121284485, 0.015152698382735252, 0.012136898003518581, 0.0405895821750164, -0.024783289059996605, -0.05514788627624512, 0.03484730422496796, -0.013530988246202469, 0.0319477915763855, 0.04537525027990341, -0.04497901350259781, 0.009621822275221348, -0.013845544308423996, 0.0046155862510204315, 0.03047163411974907, 0.0058857654221355915, 0.005858785007148981, 0.01180865429341793, 0.02734190598130226, 0.012322399765253067, 0.03992653638124466, 0.015777742490172386, 0.017797520384192467, 0.02265017107129097, -0.018233606591820717, 0.02064627595245838, ... 924 more items ], size: 1024 } Optionally, after you have confirmed you are successfully generating embeddings with the local model, you can uncomment the code in lines 35-52 to generate embeddings for all the documents in the collection. Save the file.
Then, run the command to execute the code:
node --env-file=.env generate-embeddings.js [ Tensor { dims: [ 1024 ], type: 'float32', data: Float32Array(1024) [ -0.043243519961833954, 0.01316747535020113, -0.011639945209026337, -0.025046885013580322, 0.005129443947225809, -0.02003324404358864, 0.005245734006166458, 0.10105721652507782, 0.05425914749503136, -0.010824322700500488, 0.021903572604060173, 0.048009492456912994, 0.01291663944721222, -0.015903260558843613, -0.008034848608076572, -0.003592714900150895, -0.029337648302316666, 0.02282896265387535, -0.029112281277775764, 0.011099508963525295, -0.012238143011927605, -0.008351574651896954, -0.048714976757764816, 0.001015961286611855, 0.02252192236483097, 0.04426417499780655, 0.03514830768108368, -0.02088250033557415, 0.06391220539808273, 0.06896235048770905, -0.015386332757771015, -0.019206153228878975, 0.015263230539858341, -0.00019019744649995118, -0.032121095806360245, 0.015855342149734497, 0.05055809020996094, 0.004083932377398014, 0.026945054531097412, -0.0505746565759182, -0.009507855400443077, -0.012497996911406517, 0.06249537691473961, -0.04026378318667412, 0.010749109089374542, 0.016748877242207527, -0.0235306303948164, -0.03941794112324715, 0.027474915608763695, -0.02181144617497921, 0.0026422827504575253, 0.005104491952806711, 0.027314607053995132, 0.019283341243863106, 0.005245842970907688, -0.018712762743234634, -0.08618085831403732, 0.003314188914373517, 0.008071620017290115, 0.05356570705771446, -0.008000597357749939, 0.006983411032706499, -0.0070550404489040375, -0.043323490768671036, 0.03490140289068222, 0.03810165822505951, 0.0406375490128994, -0.0032191979698836803, 0.01489361934363842, -0.01609957590699196, -0.006372962612658739, 0.03360277786850929, -0.014810526743531227, -0.00925799086689949, -0.01885424554347992, 0.0182492695748806, 0.009002899751067162, -0.004713123198598623, -0.00846288911998272, -0.012471121735870838, -0.0080558517947793, 0.0135461101308465, 0.03335557505488396, -0.0027410900220274925, -0.02145615592598915, 0.01378028653562069, 0.03708091005682945, 0.03519297018647194, 0.014239554293453693, 0.02219904027879238, 0.0015641176141798496, 0.02624501660466194, 0.022713981568813324, -0.004414170514792204, 0.026919621974229813, -0.002607459668070078, -0.04017219692468643, -0.003570320550352335, -0.022905709221959114, 0.030657364055514336, ... 924 more items ], size: 1024 } ] Generating embeddings for documents. Standby. Count of documents updated: 50
Paste the following code into your notebook.
This code performs the following actions:
Connects to your local Atlas deployment or Atlas cluster and selects the sample_airbnb.listingsAndReviews collection.
Loads the mixedbread-ai/mxbai-embed-large-v1 model from the Hugging Face model hub and saves it locally. To learn more, see Downloading models.
Defines a function that uses the model to generate vector embeddings.
For a subset of documents in the collection:
Generates an embedding from the document's summary field.
Updates the document by creating a new field called embeddings that contains the embedding.
from pymongo import MongoClient from sentence_transformers import SentenceTransformer # Connect to your local Atlas deployment or Atlas Cluster client = MongoClient(ATLAS_CONNECTION_STRING) # Select the sample_airbnb.listingsAndReviews collection collection = client["sample_airbnb"]["listingsAndReviews"] # Load the embedding model (https://huggingface.co/sentence-transformers/mixedbread-ai/mxbai-embed-large-v1) model_path = "<model-path>" model = SentenceTransformer('mixedbread-ai/mxbai-embed-large-v1') model.save(model_path) model = SentenceTransformer(model_path) # Define function to generate embeddings def get_embedding(text): return model.encode(text).tolist() # Filters for only documents with a summary field and without an embeddings field filter = { '$and': [ { 'summary': { '$exists': True, '$ne': None } }, { 'embeddings': { '$exists': False } } ] } # Creates embeddings for subset of the collection updated_doc_count = 0 for document in collection.find(filter).limit(50): text = document['summary'] embedding = get_embedding(text) collection.update_one({ '_id': document['_id'] }, { "$set": { 'embeddings': embedding } }, upsert=True) updated_doc_count += 1 print("Documents updated: {}".format(updated_doc_count)) Documents updated: 50
This code might take several minutes to run. After it's finished, you can
view your vector embeddings by connecting to your local deployment
from mongosh
or your application using your deployment's
connection string. Then you can run read operations on the sample_airbnb.listingsAndReviews
collection.
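For example, after connecting with mongosh, a query like the following returns one updated document so you can inspect its embeddings field. This is a sketch; use your own deployment's connection string when you connect:

use sample_airbnb
db.listingsAndReviews.findOne({ embeddings: { $exists: true } }, { summary: 1, embeddings: 1 })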
You can view your vector embeddings in the Atlas UI
by navigating to the sample_airbnb.listingsAndReviews
collection in your
cluster and expanding the fields in a document.
Tip
You can convert the embeddings in the sample data to BSON vectors for efficient storage and ingestion of vectors in Atlas. To learn more, see how to convert native embeddings to BSON vectors.
Create the Atlas Vector Search Index
To enable vector search on the sample_airbnb.listingsAndReviews
collection, create an Atlas Vector Search index.
This tutorial walks you through how to create an Atlas Vector Search index programmatically with a supported MongoDB Driver or using the Atlas CLI. For information on other ways to create an Atlas Vector Search index, see How to Index Fields for Vector Search.
Note
To create an Atlas Vector Search index, you must have Project Data Access Admin
or higher access to the Atlas project.
To create an Atlas Vector Search index for a collection using the MongoDB Go driver v1.16.0 or later, perform the following steps:
Define the Atlas Vector Search index.
Create a file named vector-index.go
and paste the following code in
the file:
package main import ( "context" "fmt" "log" "os" "time" "github.com/joho/godotenv" "go.mongodb.org/mongo-driver/bson" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" ) func main() { ctx := context.Background() if err := godotenv.Load(); err != nil { log.Println("no .env file found") } // Connect to your Atlas cluster uri := os.Getenv("ATLAS_CONNECTION_STRING") if uri == "" { log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.") } clientOptions := options.Client().ApplyURI(uri) client, err := mongo.Connect(ctx, clientOptions) if err != nil { log.Fatalf("failed to connect to the server: %v", err) } defer func() { _ = client.Disconnect(ctx) }() // Set the namespace coll := client.Database("sample_airbnb").Collection("listingsAndReviews") indexName := "vector_index" opts := options.SearchIndexes().SetName(indexName).SetType("vectorSearch") type vectorDefinitionField struct { Type string `bson:"type"` Path string `bson:"path"` NumDimensions int `bson:"numDimensions"` Similarity string `bson:"similarity"` } type vectorDefinition struct { Fields []vectorDefinitionField `bson:"fields"` } indexModel := mongo.SearchIndexModel{ Definition: vectorDefinition{ Fields: []vectorDefinitionField{{ Type: "vector", Path: "embeddings", NumDimensions: 768, Similarity: "cosine"}}, }, Options: opts, } log.Println("Creating the index.") searchIndexName, err := coll.SearchIndexes().CreateOne(ctx, indexModel) if err != nil { log.Fatalf("failed to create the search index: %v", err) } // Await the creation of the index. log.Println("Polling to confirm successful index creation.") log.Println("NOTE: This may take up to a minute.") searchIndexes := coll.SearchIndexes() var doc bson.Raw for doc == nil { cursor, err := searchIndexes.List(ctx, options.SearchIndexes().SetName(searchIndexName)) if err != nil { fmt.Errorf("failed to list search indexes: %w", err) } if !cursor.Next(ctx) { break } name := cursor.Current.Lookup("name").StringValue() queryable := cursor.Current.Lookup("queryable").Boolean() if name == searchIndexName && queryable { doc = cursor.Current } else { time.Sleep(5 * time.Second) } } log.Println("Name of Index Created: " + searchIndexName) }
This index definition specifies indexing the embeddings
field
in an index of the vectorSearch type
for the sample_airbnb.listingsAndReviews
collection.
This field contains the embeddings created using the
embedding model. The index definition specifies 768
vector
dimensions and measures similarity using cosine
.
To create an Atlas Vector Search index for a collection using the MongoDB Java driver v5.2.0 or later, perform the following steps:
Define a method to create the Atlas Vector Search index.
Create a file named VectorIndex.java
and paste the following
code.
This code calls a createSearchIndexes()
helper
method, which takes your MongoCollection
object and creates an Atlas Vector Search
index on your collection using the following index definition:
Index the embeddings field in a vectorSearch index type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model.
Enforce 768 vector dimensions and measure similarity between vectors using cosine.
import com.mongodb.MongoException; import com.mongodb.client.ListSearchIndexesIterable; import com.mongodb.client.MongoClient; import com.mongodb.client.MongoClients; import com.mongodb.client.MongoCollection; import com.mongodb.client.MongoCursor; import com.mongodb.client.MongoDatabase; import com.mongodb.client.model.SearchIndexModel; import com.mongodb.client.model.SearchIndexType; import org.bson.Document; import org.bson.conversions.Bson; import java.util.Collections; import java.util.List; public class VectorIndex { public static void main(String[] args) { String uri = System.getenv("ATLAS_CONNECTION_STRING"); if (uri == null || uri.isEmpty()) { throw new IllegalStateException("ATLAS_CONNECTION_STRING env variable is not set or is empty."); } // establish connection and set namespace try (MongoClient mongoClient = MongoClients.create(uri)) { MongoDatabase database = mongoClient.getDatabase("sample_airbnb"); MongoCollection<Document> collection = database.getCollection("listingsAndReviews"); // define the index details for the index model String indexName = "vector_index"; Bson definition = new Document( "fields", Collections.singletonList( new Document("type", "vector") .append("path", "embeddings") .append("numDimensions", 768) .append("similarity", "cosine"))); SearchIndexModel indexModel = new SearchIndexModel( indexName, definition, SearchIndexType.vectorSearch()); // create the index using the defined model try { List<String> result = collection.createSearchIndexes(Collections.singletonList(indexModel)); System.out.println("Successfully created a vector index named: " + result); } catch (Exception e) { throw new RuntimeException(e); } // wait for Atlas to build the index and make it queryable System.out.println("Polling to confirm the index has completed building."); System.out.println("It may take up to a minute for the index to build before you can query using it."); waitForIndexReady(collection, indexName); } catch (MongoException me) { throw new RuntimeException("Failed to connect to MongoDB ", me); } catch (Exception e) { throw new RuntimeException("Operation failed: ", e); } } /** * Polls the collection to check whether the specified index is ready to query. */ public static void waitForIndexReady(MongoCollection<Document> collection, String indexName) throws InterruptedException { ListSearchIndexesIterable<Document> searchIndexes = collection.listSearchIndexes(); while (true) { try (MongoCursor<Document> cursor = searchIndexes.iterator()) { if (!cursor.hasNext()) { break; } Document current = cursor.next(); String name = current.getString("name"); boolean queryable = current.getBoolean("queryable"); if (name.equals(indexName) && queryable) { System.out.println(indexName + " index is ready to query"); return; } else { Thread.sleep(500); } } } } }
Create the Atlas Vector Search index.
Save and run the file. The output resembles:
Successfully created a vector index named: [vector_index] Polling to confirm the index has completed building. It may take up to a minute for the index to build before you can query using it. vector_index index is ready to query
To create an Atlas Vector Search index for a collection using the MongoDB Node driver v6.6.0 or later, perform the following steps:
Define the Atlas Vector Search index.
Create a file named vector-index.js
and paste the following code in
the file:
import { MongoClient } from 'mongodb'; // Connect to your Atlas deployment const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING); async function run() { try { const database = client.db("sample_airbnb"); const collection = database.collection("listingsAndReviews"); // Define your Atlas Vector Search index const index = { name: "vector_index", type: "vectorSearch", definition: { "fields": [ { "type": "vector", "numDimensions": 1024, "path": "embeddings", "similarity": "cosine" } ] } } // Call the method to create the index const result = await collection.createSearchIndex(index); console.log(result); } finally { await client.close(); } } run().catch(console.dir);
This index definition specifies indexing the embeddings
field
in an index of the vectorSearch type
for the sample_airbnb.listingsAndReviews
collection.
This field contains the embeddings created using the
embedding model. The index definition specifies 1024
vector
dimensions and measures similarity using cosine
.
To create an Atlas Vector Search index for a collection using the PyMongo driver v4.7 or later, perform the following steps:
You can create the index directly from your application with the PyMongo driver. Paste and run the following code in your notebook:
from pymongo.operations import SearchIndexModel # Create your index model, then create the search index search_index_model = SearchIndexModel( definition = { "fields": [ { "type": "vector", "numDimensions": 1024, "path": "embeddings", "similarity": "cosine" } ] }, name = "vector_index", type = "vectorSearch" ) collection.create_search_index(model=search_index_model)
This index definition specifies indexing the embeddings
field
in an index of the vectorSearch type
for the sample_airbnb.listingsAndReviews
collection.
This field contains the embeddings created using the
embedding model. The index definition specifies 1024
vector
dimensions and measures similarity using cosine
.
To create an Atlas Vector Search index using the Atlas CLI, perform the following steps:
Define the Atlas Vector Search index.
Create a file named vector-index.json
and paste the following index
definition in the file:
{ "database": "sample_airbnb", "collectionName": "listingsAndReviews", "type": "vectorSearch", "name": "vector_index", "fields": [ { "type": "vector", "path": "embeddings", "numDimensions": 768, "similarity": "cosine" } ] }
This index definition specifies the following:
Index the embeddings field in a vectorSearch index type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model.
Enforce 768 vector dimensions and measure similarity between vectors using cosine.
Create the Atlas Vector Search index.
Save the file in your project directory, and then run the following command
in your terminal, replacing <path-to-file>
with the path to the
vector-index.json
file that you created.
atlas deployments search indexes create --file <path-to-file>
For example, your path might resemble:
/Users/<username>/local-rag-mongodb/vector-index.json
.
Define the Atlas Vector Search index.
Create a file named vector-index.json
and paste the following index
definition in the file:
{ "database": "sample_airbnb", "collectionName": "listingsAndReviews", "type": "vectorSearch", "name": "vector_index", "fields": [ { "type": "vector", "path": "embeddings", "numDimensions": 768, "similarity": "cosine" } ] }
This index definition specifies the following:
Index the embeddings field in a vectorSearch index type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model.
Enforce 768 vector dimensions and measure similarity between vectors using cosine.
Create the Atlas Vector Search index.
Save the file in your project directory, and then run the following command
in your terminal, replacing <path-to-file>
with the path to the
vector-index.json
file that you created.
atlas deployments search indexes create --file <path-to-file>
For example, your path might resemble:
/Users/<username>/local-rag-mongodb/vector-index.json
.
Define the Atlas Vector Search index.
Create a file named vector-index.json
and paste the following index
definition in the file.
This index definition specifies indexing the embeddings
field
in an index of the vectorSearch type
for the sample_airbnb.listingsAndReviews
collection.
This field contains the embeddings created using the
embedding model. The index definition specifies 1024
vector
dimensions and measures similarity using cosine
.
{ "database": "sample_airbnb", "collectionName": "listingsAndReviews", "type": "vectorSearch", "name": "vector_index", "fields": [ { "type": "vector", "path": "embeddings", "numDimensions": 1024, "similarity": "cosine" } ] }
Create the Atlas Vector Search index.
Save the file in your project directory, and then run the following command
in your terminal, replacing <path-to-file>
with the path to the
vector-index.json
file that you created.
atlas deployments search indexes create --file <path-to-file>
This path should resemble: /Users/<username>/local-rag-mongodb/vector-index.json
.
Define the Atlas Vector Search index.
Create a file named vector-index.json
and paste the following index
definition in the file.
This index definition specifies indexing the embeddings
field
in an index of the vectorSearch type
for the sample_airbnb.listingsAndReviews
collection.
This field contains the embeddings created using the
embedding model. The index definition specifies 1024
vector
dimensions and measures similarity using cosine
.
{ "database": "sample_airbnb", "collectionName": "listingsAndReviews", "type": "vectorSearch", "name": "vector_index", "fields": [ { "type": "vector", "path": "embeddings", "numDimensions": 1024, "similarity": "cosine" } ] }
Create the Atlas Vector Search index.
Save the file in your project directory, and then run the following command
in your terminal, replacing <path-to-file>
with the path to the
vector-index.json
file that you created.
atlas deployments search indexes create --file <path-to-file>
This path should resemble: /Users/<username>/local-rag-mongodb/vector-index.json
.
Answer Questions with the Local LLM
This section demonstrates a sample RAG implementation that you can run locally using Atlas Vector Search and Ollama.
Query the database for relevant documents.
Navigate to the common directory.

cd common

Create a file called
retrieve-documents.go
and paste the following code into it:retrieve-documents.gopackage common import ( "context" "log" "os" "github.com/joho/godotenv" "go.mongodb.org/mongo-driver/bson" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" ) type Document struct { Summary string `bson:"summary"` ListingURL string `bson:"listing_url"` Score float64 `bson:"score"` } func RetrieveDocuments(query string) []Document { ctx := context.Background() if err := godotenv.Load(); err != nil { log.Println("no .env file found") } // Connect to your Atlas cluster uri := os.Getenv("ATLAS_CONNECTION_STRING") if uri == "" { log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.") } clientOptions := options.Client().ApplyURI(uri) client, err := mongo.Connect(ctx, clientOptions) if err != nil { log.Fatalf("failed to connect to the server: %v", err) } defer func() { _ = client.Disconnect(ctx) }() // Set the namespace coll := client.Database("sample_airbnb").Collection("listingsAndReviews") var array []string array = append(array, query) queryEmbedding := GetEmbeddings(array) vectorSearchStage := bson.D{ {"$vectorSearch", bson.D{ {"index", "vector_index"}, {"path", "embeddings"}, {"queryVector", queryEmbedding[0]}, {"exact", true}, {"limit", 5}, }}} projectStage := bson.D{ {"$project", bson.D{ {"_id", 0}, {"summary", 1}, {"listing_url", 1}, {"score", bson.D{{"$meta", "vectorSearchScore"}}}, }}} cursor, err := coll.Aggregate(ctx, mongo.Pipeline{vectorSearchStage, projectStage}) if err != nil { log.Fatalf("failed to retrieve data from the server: %v", err) } var results []Document if err = cursor.All(ctx, &results); err != nil { log.Fatalf("failed to unmarshal retrieved docs to model objects: %v", err) } return results } This code performs a vector query on your local Atlas deployment or your Atlas cluster.
Run a test query to confirm you're getting the expected results. Move back to the project root directory.
cd ../

Create a new file called
test-query.go
, and paste the following code into it:test-query.gopackage main import ( "fmt" "local-rag-mongodb/common" // Module that contains the RetrieveDocuments function "log" "strings" ) func main() { query := "beach house" matchingDocuments := common.RetrieveDocuments(query) if matchingDocuments == nil { log.Fatal("No documents matched the query.\n") } var textDocuments strings.Builder for _, doc := range matchingDocuments { // Print the contents of the matching documents for verification fmt.Printf("Summary: %v\n", doc.Summary) fmt.Printf("Listing URL: %v\n", doc.ListingURL) fmt.Printf("Score: %v\n", doc.Score) // Build a single text string to use as the context for the QA textDocuments.WriteString("Summary: ") textDocuments.WriteString(doc.Summary) textDocuments.WriteString("\n") textDocuments.WriteString("Listing URL: ") textDocuments.WriteString(doc.ListingURL) textDocuments.WriteString("\n") } fmt.Printf("\nThe constructed context for the QA follows:\n\n") fmt.Printf(textDocuments.String()) } Run the following code to execute the query:
go run test-query.go

Summary: "Lani Beach House" Aloha - Please do not reserve until reading about the State Tax in "Other Things to Note" section. Please do not reserve unless you agree to pay taxes to Hawaii Beach Homes directly. If you have questions, please inquire before booking. The home has been completely redecorated in a luxurious island style: vaulted ceilings, skylights, granite counter tops, stainless steel appliances and a gourmet kitchen are just some of the the features. All bedrooms have ocean views
Listing URL: https://www.airbnb.com/rooms/11553333
Score: 0.85715651512146
Summary: This peaceful house in North Bondi is 300m to the beach and a minute's walk to cafes and bars. With 3 bedrooms, (can sleep up to 8) it is perfect for families, friends and pets. The kitchen was recently renovated and a new lounge and chairs installed. The house has a peaceful, airy, laidback vibe - a perfect beach retreat. Longer-term bookings encouraged. Parking for one car. A parking permit for a second car can also be obtained on request.
Listing URL: https://www.airbnb.com/rooms/10423504
Score: 0.8425835371017456
Summary: There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places.
Listing URL: https://www.airbnb.com/rooms/10488837
Score: 0.8403302431106567
Summary: Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities.
Listing URL: https://www.airbnb.com/rooms/10317142
Score: 0.8367050886154175
Summary: This is a gorgeous home just off the main rd, with lots of sun and new amenities. room has own entrance with small deck, close proximity to the beach , bus to the junction , around the corner form all the cafes, bars and restaurants (2 mins).
Listing URL: https://www.airbnb.com/rooms/11719579
Score: 0.8262639045715332

The constructed context for the QA follows:

Summary: "Lani Beach House" Aloha - Please do not reserve until reading about the State Tax in "Other Things to Note" section. Please do not reserve unless you agree to pay taxes to Hawaii Beach Homes directly. If you have questions, please inquire before booking. The home has been completely redecorated in a luxurious island style: vaulted ceilings, skylights, granite counter tops, stainless steel appliances and a gourmet kitchen are just some of the the features. All bedrooms have ocean views
Listing URL: https://www.airbnb.com/rooms/11553333
Summary: This peaceful house in North Bondi is 300m to the beach and a minute's walk to cafes and bars. With 3 bedrooms, (can sleep up to 8) it is perfect for families, friends and pets. The kitchen was recently renovated and a new lounge and chairs installed. The house has a peaceful, airy, laidback vibe - a perfect beach retreat. Longer-term bookings encouraged. Parking for one car. A parking permit for a second car can also be obtained on request.
Listing URL: https://www.airbnb.com/rooms/10423504
Summary: There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places.
Listing URL: https://www.airbnb.com/rooms/10488837
Summary: Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities.
Listing URL: https://www.airbnb.com/rooms/10317142
Summary: This is a gorgeous home just off the main rd, with lots of sun and new amenities. room has own entrance with small deck, close proximity to the beach , bus to the junction , around the corner form all the cafes, bars and restaurants (2 mins).
Listing URL: https://www.airbnb.com/rooms/11719579
Answer questions on your data.
Create a file called local-llm.go
and paste the following code:
package main

import (
    "context"
    "local-rag-mongodb/common" // Module that contains the RetrieveDocuments function
    "log"
    "strings"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/ollama"
    "github.com/tmc/langchaingo/prompts"
)

func main() {
    // Retrieve documents from the collection that match the query
    const query = "beach house"
    matchingDocuments := common.RetrieveDocuments(query)
    if matchingDocuments == nil {
        log.Fatalf("no documents matched the query %q", query)
    }

    // Generate the text string from the matching documents to pass to the
    // LLM as context to answer the question
    var textDocuments strings.Builder
    for _, doc := range matchingDocuments {
        textDocuments.WriteString("Summary: ")
        textDocuments.WriteString(doc.Summary)
        textDocuments.WriteString("\n")
        textDocuments.WriteString("Listing URL: ")
        textDocuments.WriteString(doc.ListingURL)
        textDocuments.WriteString("\n")
    }

    // Have the LLM answer the question using the provided context
    llm, err := ollama.New(ollama.WithModel("mistral"))
    if err != nil {
        log.Fatalf("failed to initialize the Ollama Mistral model client: %v", err)
    }

    const question = `Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.`
    template := prompts.NewPromptTemplate(
        `Use the following pieces of context to answer the question at the end.
        Context: {{.context}}
        Question: {{.question}}`,
        []string{"context", "question"},
    )
    prompt, err := template.Format(map[string]any{
        "context":  textDocuments.String(),
        "question": question,
    })
    if err != nil {
        log.Fatalf("failed to format the prompt template: %v", err)
    }

    ctx := context.Background()
    completion, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
    if err != nil {
        log.Fatalf("failed to generate a response from the given prompt: %q", prompt)
    }

    log.Println("Response: ", completion)
}
This code does the following:
Creates an embedding for your query string.
Queries for relevant documents.
Prompts the LLM and returns the response. The generated response might vary.
Run the following code to complete your RAG implementation:
go run local-llm.go
2024/10/09 10:34:02 Response: Based on the context provided, here are some Airbnb listings for beach houses that you might find interesting:

1. Lani Beach House (Hawaii) - [Link](https://www.airbnb.com/rooms/11553333)
2. Peaceful North Bondi House (Australia) - [Link](https://www.airbnb.com/rooms/10423504)
3. Ocean Living! Secluded Secret Beach! (Florida, USA) - [Link](https://www.airbnb.com/rooms/10317142)
4. Gorgeous Home just off the main road (California, USA) - [Link](https://www.airbnb.com/rooms/11719579)
This section demonstrates a sample RAG implementation that you can run locally using Atlas Vector Search and Ollama.
Define code to run the local LLM.
Create a new file called LocalLLM.java
and paste the following code.
This code uses the getEmbedding and retrieveDocuments methods and the Ollama chat model to do the following:

Connect to your local Atlas deployment or your Atlas cluster.

Generate an embedding for the query string by using the getEmbedding method you defined previously.

Query the collection for relevant documents by using the retrieveDocuments method. The query includes an aggregation pipeline with a projection stage to return only the listing_url, summary, and vector search score fields. You can modify or remove this pipeline to better suit your data and use case.

Create a context by concatenating a question with the retrieved documents by using the createPrompt method.

Feed the created prompt to the LLM chat model you defined previously to generate a response.

Print the question and generated response to the console.
Note
For demonstration purposes, we also print the filled-in prompt with the context information. You should remove this output in a production environment.
import com.mongodb.MongoException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.search.FieldSearchPath;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;
import dev.langchain4j.model.ollama.OllamaChatModel;
import org.bson.BsonArray;
import org.bson.BsonValue;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import static com.mongodb.client.model.Aggregates.project;
import static com.mongodb.client.model.Aggregates.vectorSearch;
import static com.mongodb.client.model.Projections.exclude;
import static com.mongodb.client.model.Projections.fields;
import static com.mongodb.client.model.Projections.include;
import static com.mongodb.client.model.Projections.metaVectorSearchScore;
import static com.mongodb.client.model.search.SearchPath.fieldPath;
import static com.mongodb.client.model.search.VectorSearchOptions.exactVectorSearchOptions;
import static java.util.Arrays.asList;

public class LocalLLM {
    // User input: the question to answer
    static String question = "Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.";

    public static void main(String[] args) {
        String uri = System.getenv("ATLAS_CONNECTION_STRING");
        if (uri == null || uri.isEmpty()) {
            throw new IllegalStateException("ATLAS_CONNECTION_STRING env variable is not set or is empty.");
        }

        // establish connection and set namespace
        try (MongoClient mongoClient = MongoClients.create(uri)) {
            MongoDatabase database = mongoClient.getDatabase("sample_airbnb");
            MongoCollection<Document> collection = database.getCollection("listingsAndReviews");

            // generate a response to the user question
            System.out.println("Question: " + question);
            try {
                createPrompt(question, collection);
            } catch (Exception e) {
                throw new RuntimeException("An error occurred while generating the response: ", e);
            }
        } catch (MongoException me) {
            throw new RuntimeException("Failed to connect to MongoDB ", me);
        } catch (Exception e) {
            throw new RuntimeException("Operation failed: ", e);
        }
    }

    /**
     * Returns a list of documents from the specified MongoDB collection that
     * match the user's question.
     * NOTE: Update or omit the projection stage to change the desired fields in the response
     */
    public static List<Document> retrieveDocuments(String question, MongoCollection<Document> collection) {
        try {
            // generate the query embedding to use in the vector search
            BsonArray queryEmbeddingBsonArray = OllamaModels.getEmbedding(question);
            List<Double> queryEmbedding = new ArrayList<>();
            for (BsonValue value : queryEmbeddingBsonArray.stream().toList()) {
                queryEmbedding.add(value.asDouble().getValue());
            }

            // define the pipeline stages for the vector search index
            String indexName = "vector_index";
            FieldSearchPath fieldSearchPath = fieldPath("embeddings");
            int limit = 5;
            List<Bson> pipeline = asList(
                    vectorSearch(
                            fieldSearchPath,
                            queryEmbedding,
                            indexName,
                            limit,
                            exactVectorSearchOptions()),
                    project(
                            fields(
                                    exclude("_id"),
                                    include("listing_url"),
                                    include("summary"),
                                    metaVectorSearchScore("score"))));

            // run the query and return the matching documents
            List<Document> matchingDocuments = new ArrayList<>();
            collection.aggregate(pipeline).forEach(matchingDocuments::add);
            return matchingDocuments;
        } catch (Exception e) {
            System.err.println("Error occurred while retrieving documents: " + e.getMessage());
            return new ArrayList<>();
        }
    }

    /**
     * Creates a templated prompt using the question and retrieved documents, then generates
     * a response using the local Ollama chat model.
     */
    public static void createPrompt(String question, MongoCollection<Document> collection) {
        // Retrieve documents matching the user's question
        List<Document> retrievedDocuments = retrieveDocuments(question, collection);
        if (retrievedDocuments.isEmpty()) {
            System.out.println("No relevant documents found. Unable to generate a response.");
            return;
        } else {
            System.out.println("Generating a response from the retrieved documents. This may take a few moments.");
        }

        // Create a prompt template
        OllamaChatModel ollamaChatModel = OllamaModels.getChatModel();
        PromptTemplate promptBuilder = PromptTemplate.from("""
                Use the following pieces of context to answer the question at the end:
                {{information}}
                ---------------
                {{question}}
                """);

        // build the information string from the retrieved documents
        StringBuilder informationBuilder = new StringBuilder();
        for (int i = 0; i < retrievedDocuments.size(); i++) {
            Document doc = retrievedDocuments.get(i);
            String listingUrl = doc.getString("listing_url");
            String summary = doc.getString("summary");
            informationBuilder.append("Listing URL: ").append(listingUrl)
                    .append("\nSummary: ").append(summary)
                    .append("\n\n");
        }
        String information = informationBuilder.toString();

        Map<String, Object> variables = new HashMap<>();
        variables.put("question", question);
        variables.put("information", information);

        // generate and output the response from the chat model
        Prompt prompt = promptBuilder.apply(variables);
        AiMessage response = ollamaChatModel.generate(prompt.toUserMessage()).content();
        System.out.println("Answer: " + response.text());

        // display the filled-in prompt and context information
        // NOTE: included for demonstration purposes only
        System.out.println("______________________");
        System.out.println("Final Prompt Sent to LLM:");
        System.out.println(prompt.text());
        System.out.println("______________________");
        System.out.println("Number of documents in context: " + retrievedDocuments.size());
    }
}
Generate a response to a question on your data.
Save and run the file to complete your RAG implementation. The output resembles the following, although your generated response may vary:
Question: Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.
Generating a response from the retrieved documents. This may take a few moments.
Answer: Based on the context provided, here are some beach house Airbnb listings that might suit your needs:

1. Lani Beach House - Aloha: This luxurious beach house offers ocean views from all bedrooms and features vaulted ceilings, skylights, granite countertops, stainless steel appliances, and a gourmet kitchen. You can find it at this link: https://www.airbnb.com/rooms/11553333
2. Ocean Living! Secluded Secret Beach!: This spacious 4-bedroom, 4-bath beach house is perfect for families or groups and is less than 20 steps from the ocean. It's located in a gated beachfront estate with lots of space for activities. You can find it at this link: https://www.airbnb.com/rooms/10317142
3. A beautiful and comfortable 1-Bedroom Condo in Makaha Valley: This condo offers stunning ocean and mountain views, a full kitchen, large bathroom, and is suited for longer stays. The famous Makaha Surfing Beach is not even a mile away. You can find it at this link: https://www.airbnb.com/rooms/10266175
4. There are 2 bedrooms and a living room in the house: This listing does not provide much information about the beach, but it mentions that the house is close to the sea side and historical places. You can find it at this link: https://www.airbnb.com/rooms/10488837
5. The Apartment on Copacabana beach block: This apartment is well-located, a 5-minute walk from Ipanema beach, and offers all the amenities of home, including a kitchen, washing machine, and several utensils for use. You can find it at this link: https://www.airbnb.com/rooms/10038496

______________________
Final Prompt Sent to LLM:
Use the following pieces of context to answer the question at the end:
Listing URL: https://www.airbnb.com/rooms/11553333
Summary: "Lani Beach House" Aloha - Please do not reserve until reading about the State Tax in "Other Things to Note" section. Please do not reserve unless you agree to pay taxes to Hawaii Beach Homes directly. If you have questions, please inquire before booking. The home has been completely redecorated in a luxurious island style: vaulted ceilings, skylights, granite counter tops, stainless steel appliances and a gourmet kitchen are just some of the the features. All bedrooms have ocean views

Listing URL: https://www.airbnb.com/rooms/10317142
Summary: Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities.

Listing URL: https://www.airbnb.com/rooms/10266175
Summary: A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.

Listing URL: https://www.airbnb.com/rooms/10488837
Summary: There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places.

Listing URL: https://www.airbnb.com/rooms/10038496
Summary: The Apartment has a living room, toilet, bedroom (suite) and American kitchen. Well located, on the Copacabana beach block a 05 Min. walk from Ipanema beach (Arpoador). Internet wifi, cable tv, air conditioning in the bedroom, ceiling fans in the bedroom and living room, kitchen with microwave, cooker, Blender, dishes, cutlery and service area with fridge, washing machine, clothesline for drying clothes and closet with several utensils for use. The property boasts 45 m2.

---------------
Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.

______________________
Number of documents in context: 5
This section demonstrates a sample RAG implementation that you can run locally using Atlas Vector Search and GPT4All.
Query the database for relevant documents.
Create a file called retrieve-documents.js
and paste the following
code into it:
import { MongoClient } from 'mongodb';
import { getEmbeddings } from './get-embeddings.js';

// Function to get the results of a vector query
export async function getQueryResults(query) {
    // Connect to your Atlas cluster
    const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);

    try {
        // Get embeddings for a query
        const queryEmbeddings = await getEmbeddings(query);
        await client.connect();

        const db = client.db("sample_airbnb");
        const collection = db.collection("listingsAndReviews");

        const pipeline = [
            {
                $vectorSearch: {
                    index: "vector_index",
                    queryVector: queryEmbeddings,
                    path: "embeddings",
                    exact: true,
                    limit: 5
                }
            },
            {
                $project: {
                    _id: 0,
                    summary: 1,
                    listing_url: 1,
                    score: { $meta: "vectorSearchScore" }
                }
            }
        ];

        // Retrieve documents from Atlas using this Vector Search query
        const result = collection.aggregate(pipeline);

        const arrayOfQueryDocs = [];
        for await (const doc of result) {
            arrayOfQueryDocs.push(doc);
        }
        return arrayOfQueryDocs;
    } catch (err) {
        console.log(err.stack);
    } finally {
        await client.close();
    }
}
This code performs a vector query on your local Atlas deployment or your Atlas cluster.
Run a test query to confirm you're getting the expected results. Create
a new file called test-query.js
, and paste the following code into it:
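A minimal version of this test script, assuming the getQueryResults function exported from retrieve-documents.js above, might look like the following sketch:

import { getQueryResults } from './retrieve-documents.js';

async function run() {
    try {
        // Query for documents that are semantically similar to "beach house"
        const documents = await getQueryResults("beach house");

        // Print each matching document for verification
        documents.forEach(doc => console.log(doc));
    } catch (err) {
        console.log(err.stack);
    }
}

run().catch(console.dir);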
Run the following code to execute the query:
node --env-file=.env test-query.js
{
  listing_url: 'https://www.airbnb.com/rooms/10317142',
  summary: 'Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities.',
  score: 0.8703486323356628
}
{
  listing_url: 'https://www.airbnb.com/rooms/10488837',
  summary: 'There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places.',
  score: 0.861828088760376
}
{
  listing_url: 'https://www.airbnb.com/rooms/11719579',
  summary: 'This is a gorgeous home just off the main rd, with lots of sun and new amenities. room has own entrance with small deck, close proximity to the beach , bus to the junction , around the corner form all the cafes, bars and restaurants (2 mins).',
  score: 0.8616757392883301
}
{
  listing_url: 'https://www.airbnb.com/rooms/12657285',
  summary: 'This favourite home offers a huge balcony, lots of space, easy life, all the comfort you need and a fantastic location! The beach is only 3 minutes away. Metro is 2 blocks away (starting august 2016).',
  score: 0.8583258986473083
}
{
  listing_url: 'https://www.airbnb.com/rooms/10985735',
  summary: '5 minutes to seaside where you can swim, and 5 minutes to the woods, this two floors single house contains a cultivated garden with fruit trees, two large bedrooms and a big living room with a large sea view.',
  score: 0.8573609590530396
}
Download the local LLM and model information mapping.
Click the following button to download the Mistral 7B model from GPT4All. To explore other models, refer to the GPT4All website.
Download

Move this model into your local-rag-mongodb project directory.

In your project directory, download the file that contains the model information.
curl -L https://gpt4all.io/models/models3.json -o ./models3.json
Answer questions on your data.
Create a file called local-llm.js
and paste the following code:
import { loadModel, createCompletionStream } from "gpt4all";
import { getQueryResults } from './retrieve-documents.js';

async function run() {
    try {
        const query = "beach house";
        const documents = await getQueryResults(query);

        let textDocuments = "";
        documents.forEach(doc => {
            const summary = doc.summary;
            const link = doc.listing_url;
            const string = `Summary: ${summary} Link: ${link}. \n`;
            textDocuments += string;
        });

        const model = await loadModel(
            "mistral-7b-openorca.gguf2.Q4_0.gguf", {
                verbose: true,
                allowDownload: false,
                modelConfigFile: "./models3.json"
            }
        );

        const question = "Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.";
        const prompt = `Use the following pieces of context to answer the question at the end.
            {${textDocuments}}
            Question: {${question}}`;

        process.stdout.write("Output: ");
        const stream = createCompletionStream(model, prompt);
        stream.tokens.on("data", (data) => {
            process.stdout.write(data);
        });
        // wait until the stream finishes
        await stream.result;
        process.stdout.write("\n");
        model.dispose();

        console.log("\n Source documents: \n");
        console.log(textDocuments);
    } catch (err) {
        console.log(err.stack);
    }
}

run().catch(console.dir);
This code does the following:
Creates an embedding for your query string.
Queries for relevant documents.
Prompts the LLM and returns the response. The generated response might vary.
Run the following code to complete your RAG implementation:
node --env-file=.env local-llm.js
Found mistral-7b-openorca.gguf2.Q4_0.gguf at /Users/dachary.carey/.cache/gpt4all/mistral-7b-openorca.gguf2.Q4_0.gguf
Creating LLModel: {
  llmOptions: {
    model_name: 'mistral-7b-openorca.gguf2.Q4_0.gguf',
    model_path: '/Users/dachary.carey/.cache/gpt4all',
    library_path: '/Users/dachary.carey/temp/local-rag-mongodb/node_modules/gpt4all/runtimes/darwin/native;/Users/dachary.carey/temp/local-rag-mongodb',
    device: 'cpu',
    nCtx: 2048,
    ngl: 100
  },
  modelConfig: {
    systemPrompt: '<|im_start|>system\n' +
      'You are MistralOrca, a large language model trained by Alignment Lab AI.\n' +
      '<|im_end|>',
    promptTemplate: '<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>\n',
    order: 'e',
    md5sum: 'f692417a22405d80573ac10cb0cd6c6a',
    name: 'Mistral OpenOrca',
    filename: 'mistral-7b-openorca.gguf2.Q4_0.gguf',
    filesize: '4108928128',
    requires: '2.7.1',
    ramrequired: '8',
    parameters: '7 billion',
    quant: 'q4_0',
    type: 'Mistral',
    description: '<strong>Strong overall fast chat model</strong><br><ul><li>Fast responses</li><li>Chat based model</li><li>Trained by Mistral AI<li>Finetuned on OpenOrca dataset curated via <a href="https://atlas.nomic.ai/">Nomic Atlas</a><li>Licensed for commercial use</ul>',
    url: 'https://gpt4all.io/models/gguf/mistral-7b-openorca.gguf2.Q4_0.gguf',
    path: '/Users/dachary.carey/.cache/gpt4all/mistral-7b-openorca.gguf2.Q4_0.gguf'
  }
}
Output: Yes, here are a few AirBnB beach houses with links to the listings:
1. Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! - https://www.airbnb.com/rooms/10317142
2. 2 Bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places - https://www.airbnb.com/rooms/10488837
3. Gorgeous home just off the main rd, with lots of sun and new amenities. Room has own entrance with small deck, close proximity to the beach - https://www.airbnb.com/rooms/11719579
4. This favourite home offers a huge balcony, lots of space, easy life, all the comfort you need and a fantastic location! The beach is only 3 minutes away. Metro is 2 blocks away (starting august 2016) - https://www.airbnb.com/rooms/12657285
5. 5 minutes to seaside where you can swim, and 5 minutes to the woods, this two floors single house contains a cultivated garden with fruit trees, two large bedrooms and a big living room with a large sea view - https://www.airbnb.com/rooms/10985735

Source documents:

Summary: Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities. Link: https://www.airbnb.com/rooms/10317142.
Summary: There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places. Link: https://www.airbnb.com/rooms/10488837.
Summary: This is a gorgeous home just off the main rd, with lots of sun and new amenities. room has own entrance with small deck, close proximity to the beach , bus to the junction , around the corner form all the cafes, bars and restaurants (2 mins). Link: https://www.airbnb.com/rooms/11719579.
Summary: This favourite home offers a huge balcony, lots of space, easy life, all the comfort you need and a fantastic location! The beach is only 3 minutes away. Metro is 2 blocks away (starting august 2016). Link: https://www.airbnb.com/rooms/12657285.
Summary: 5 minutes to seaside where you can swim, and 5 minutes to the woods, this two floors single house contains a cultivated garden with fruit trees, two large bedrooms and a big living room with a large sea view. Link: https://www.airbnb.com/rooms/10985735.
This section demonstrates a sample RAG implementation that you can run locally using Atlas Vector Search and GPT4All.
In your notebook, run the following code snippets:
Use Atlas Vector Search to retrieve relevant documents.
In this step, you create a retrieval function called
get_query_results
that runs a sample vector search query.
It uses the get_embedding
function to create embeddings from the
search query. Then, it runs the query to return semantically similar
documents.
To learn more, see Run Vector Search Queries.
# Function to get the results of a vector search query
def get_query_results(query):
    query_embedding = get_embedding(query)

    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "queryVector": query_embedding,
                "path": "embeddings",
                "exact": True,
                "limit": 5
            }
        },
        {
            "$project": {
                "_id": 0,
                "summary": 1,
                "listing_url": 1,
                "score": {
                    "$meta": "vectorSearchScore"
                }
            }
        }
    ]

    results = collection.aggregate(pipeline)

    array_of_results = []
    for doc in results:
        array_of_results.append(doc)
    return array_of_results
To check that the function returns relevant documents,
run the following code to query for the search term beach house:
Note
Your output might vary since environmental differences can introduce slight variations to your embeddings.
import pprint

pprint.pprint(get_query_results("beach house"))
[{'listing_url': 'https://www.airbnb.com/rooms/10317142',
  'score': 0.84868323802948,
  'summary': 'Ocean Living! Secluded Secret Beach! Less than 20 steps to the '
             'Ocean! This spacious 4 Bedroom and 4 Bath house has all you need '
             'for your family or group. Perfect for Family Vacations and '
             'executive retreats. We are in a gated beachfront estate, with '
             'lots of space for your activities.'},
 {'listing_url': 'https://www.airbnb.com/rooms/10488837',
  'score': 0.8457906246185303,
  'summary': 'There are 2 bedrooms and a living room in the house. 1 Bathroom. '
             '1 Kitchen. Friendly neighbourhood. Close to sea side and '
             'Historical places.'},
 {'listing_url': 'https://www.airbnb.com/rooms/10423504',
  'score': 0.830578088760376,
  'summary': 'This peaceful house in North Bondi is 300m to the beach and a '
             "minute's walk to cafes and bars. With 3 bedrooms, (can sleep up "
             'to 8) it is perfect for families, friends and pets. The kitchen '
             'was recently renovated and a new lounge and chairs installed. '
             'The house has a peaceful, airy, laidback vibe - a perfect beach '
             'retreat. Longer-term bookings encouraged. Parking for one car. A '
             'parking permit for a second car can also be obtained on '
             'request.'},
 {'listing_url': 'https://www.airbnb.com/rooms/10548991',
  'score': 0.8174338340759277,
  'summary': 'Newly furnished two story home. The upstairs features a full '
  ...
 {'listing_url': 'https://www.airbnb.com/rooms/10186755',
  'score': 0.8083034157752991,
  'summary': 'Near to underground metro station. Walking distance to seaside. '
             '2 floors 1 entry. Husband, wife, girl and boy is living.'}]
Load the local LLM.
Click the following button to download the Mistral 7B model from GPT4All. To explore other models, refer to the GPT4All website.
Download

Move this model into your local-rag-mongodb project directory.

In your notebook, run the following code to load the local LLM.
from gpt4all import GPT4All

local_llm_path = "./mistral-7b-openorca.gguf2.Q4_0.gguf"
local_llm = GPT4All(local_llm_path)
Answer questions on your data.
Run the following code to complete your RAG implementation. This code does the following:
Queries your collection for relevant documents by using the function you just defined.
Prompts the LLM using the retrieved documents as context. The generated response might vary.
question = "Can you recommend a few AirBnBs that are beach houses? Include a link to the listing."
documents = get_query_results(question)

text_documents = ""
for doc in documents:
    summary = doc.get("summary", "")
    link = doc.get("listing_url", "")
    string = f"Summary: {summary} Link: {link}. \n"
    text_documents += string

prompt = f"""Use the following pieces of context to answer the question at the end.
    {text_documents}
    Question: {question}
"""

response = local_llm.generate(prompt)
cleaned_response = response.replace('\\n', '\n')
print(cleaned_response)
Answer: Yes, I can recommend a few AirBnB listings that are beach houses. Here they are with their respective links:
1. Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! (https://www.airbnb.com/rooms/10317142)
2. Beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views (https://www.airbnb.com/rooms/10266175)
3. Peaceful house in North Bondi, close to the beach and cafes (https://www.airbnb.com/rooms/10423504)