
Build an E-commerce Search Using MongoDB Vector Search and OpenAI

Ashiq Sultan • 11 min read • Published Mar 11, 2024 • Updated Mar 12, 2024

Introduction

In this article, we will build a product search system using MongoDB Vector Search and the OpenAI APIs. We will create a search API endpoint that receives natural language queries and returns relevant products as JSON. Along the way, we will see how to generate vector embeddings using the OpenAI embedding model, store them in MongoDB, and query them with Vector Search. We will also see how to use the OpenAI text generation model to classify user search input and build our DB query.
The API server is built using Node.js and Express, and we will build API endpoints for creating, updating, and searching products. Note that this guide focuses only on the back end; to facilitate testing, we will use Postman, with relevant screenshots provided in the respective sections for clarity. The below GIF shows a glimpse of what we will be building.
demonstration of a search request with natural language as input returns relevant products as output

High-level design

Below, you'll find a high-level design for product creation and search functionality. Please don't feel overwhelmed, as we have provided explanations for each section to help you understand the process.
high-level design for create operation
high-level design for search operation

Project setup

1. Clone the GitHub repository.

git clone https://github.com/ashiqsultan/mongodb-vector-openai.git

2. Create a .env file in the root directory of the project.

touch .env

3. Create two variables in your .env file: MONGODB_URI and OPENAI_API_KEY. You can follow the steps provided in the OpenAI docs to get the API key.

echo "MONGODB_URI=your_mongodb_uri" >> .env
echo "OPENAI_API_KEY=your_openai_api_key" >> .env

4. Install node modules.

npm install # (or) yarn install

5. Run npm run dev or yarn run dev to start the server.

npm run dev # (or) yarn run dev
If the MONGODB_URI is valid, the app should connect without any errors and start the server on port 5000. Note that you need an OpenAI account to create the OPENAI_API_KEY; see the OpenAI docs linked above.
terminal output if server starts successfully

Connecting to DB

Connecting to MongoDB Atlas from Node.js should be fairly simple. You can get the connection string by referring to the docs page. Once you have the connection string, just paste it in the .env file as MONGODB_URI. In our codebase, we have created a separate dbclient.ts file which exports a singleton function to connect with MongoDB. Now, we can call this function at the entry point file of our application like below.
// server.ts
import dbClient from './dbClient';

server.listen(app.get('port'), async () => {
  try {
    await dbClient();
  } catch (error) {
    console.error(error);
  }
});
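For reference, here is a minimal sketch of what such a singleton could look like. This is an illustrative approximation, not the exact contents of dbclient.ts, and the database name used here is an assumption.

// dbClient.ts (sketch): caches a single MongoClient connection for reuse
import { MongoClient, Db } from 'mongodb';

let db: Db | null = null;

const dbClient = async (): Promise<Db> => {
  if (db) return db; // reuse the already-established connection
  const client = new MongoClient(process.env.MONGODB_URI as string);
  await client.connect();
  db = client.db('ecommerce'); // assumed database name
  console.log('Connected to MongoDB');
  return db;
};

export default dbClient;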

Collection schema overview

You can refer to the schema model file in the codebase. We will keep the collection schema simple. Each product item will maintain the interface shown below.
interface IProducts {
  name: string;
  category: string;
  description: string;
  price: number;
  embedding: number[];
}
This interface is self-explanatory, with properties such as name, category, description, and price, representing typical attributes of a product. The unique addition is the embedding property, which will be explained in subsequent sections. This straightforward schema provides a foundation for organizing and storing product data efficiently.
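For illustration, a stored product document would look something like this (the embedding array is truncated here; the real one holds 1536 numbers, as explained in the sections below):

{
  "name": "foo phone",
  "category": "Electronics",
  "description": "This phone has good camera",
  "price": 150,
  "embedding": [0.0023, -0.0091, 0.0154, ...]
}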

Setting up vector index for collection

To enable semantic search in our MongoDB collection, we need to set up vector indexes. If that sounds fancy, in simpler terms, this allows us to query the collection using natural language.
Follow the step-by-step procedure outlined in the documentation to create a vector index from the Atlas UI.
Below is the config we need to provide in the JSON editor when creating the vector index.
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "dimensions": 1536,
        "similarity": "euclidean",
        "type": "knnVector"
      }
    }
  }
}
For those who prefer visual guides, watch our video explaining the process.
The key variables in the index configuration are the field name in the collection to be indexed (here, it's called embedding) and the dimensions value (here, set to 1536). The significance of this value will be discussed in the next section.
Creating vector index from atlas ui for product collection
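If you'd rather script the index instead of using the Atlas UI, recent versions of the Node.js driver (v6+) expose a createSearchIndex helper for Atlas clusters. The sketch below is an assumption based on that API; the collection name and the index name mynewvectorindex are placeholders you should match to your own setup.

// createIndex.ts (sketch): programmatic alternative to the Atlas UI
import dbClient from './dbClient';

const createVectorIndex = async (): Promise<void> => {
  const db = await dbClient();
  const indexName = await db.collection('products').createSearchIndex({
    name: 'mynewvectorindex', // must match the index name used in your queries
    definition: {
      mappings: {
        dynamic: true,
        fields: {
          embedding: { dimensions: 1536, similarity: 'euclidean', type: 'knnVector' },
        },
      },
    },
  });
  console.log(`Created search index: ${indexName}`);
};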

Embeddings in short

An embedding model allows us to transform text into vectors. The vector returned by the embedding model is simply an array of floating-point numbers. This is reflected in our collection interface, where we've defined the type for the embedding field as number[].
For this article, we will use the OpenAI embedding model, which by default returns vectors of size 1536. This number is what we used as the dimensions value when we created the vector index in the previous section. Learn more about embedding models.

Generating embedding using OpenAI

We have created a reusable util function in our codebase which will take a string as an input and return a vector embedding as output. This function can be used in places where we need to call the OpenAI embedding model.
async function generateEmbedding(inputText: string): Promise<number[] | null> {
  try {
    const vectorEmbedding = await openai.embeddings.create({
      input: inputText,
      model: 'text-embedding-ada-002',
    });
    const embedding = vectorEmbedding.data[0].embedding;
    return embedding;
  } catch (error) {
    console.error('Error generating embedding:', error);
    return null;
  }
}
The function is fairly straightforward. The specific model employed in our example is text-embedding-ada-002. You have the flexibility to choose other embedding models, but it's crucial to ensure that the output dimensions of the selected model match the dimensions we set when initially creating the vector index.
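As a quick sanity check, you could call the util like this; the returned array length should match the dimensions value from the index config.

// Sketch: verify the embedding size matches the vector index dimensions
const vector = await generateEmbedding('phones with good camera');
console.log(vector?.length); // 1536 for text-embedding-ada-002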

What should we embed for Vector Search?

Now that we know what an embedding is, let's discuss what to embed. For semantic search, you should embed all the fields that you intend to query. This includes any relevant information or features that you want to use as search criteria. In our product example, we will be embedding the name of the product, its category, and its description.

Embed on create

To create a new product item, we need to make a POST call to “localhost:5000/product/” with the required properties {name, category, description, price}. This will call the createOne service which handles the creation of a new product item.
// Example Product item
// product = {
//   name: 'foo phone',
//   category: 'Electronics',
//   description: 'This phone has good camera',
//   price: 150,
// };

const toEmbed = {
  name: product.name,
  category: product.category,
  description: product.description,
};

// Generate Embedding
const embedding = await generateEmbedding(JSON.stringify(toEmbed));
const documentToInsert = {
  ...product,
  embedding,
};

await productCollection.insertOne(documentToInsert);
In the code snippet above, we first create an object named toEmbed containing the fields intended for embedding. This object is then converted to stringified JSON and passed to the generateEmbedding function. As discussed in the previous section, generateEmbedding will call the OpenAI embedding model and return the required embedding array. Once we have the embedding, the new product document is created using the insertOne function. The below screenshot shows the create request and its response.
Postman screenshot of create request with response
And from our MongoDB Atlas UI, we should be able to see the inserted document.
screenshot of created data from MongoDB Atlas
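For context, the Express wiring for this endpoint might look roughly like the sketch below. The route and service import paths here are assumptions based on the description above, not verbatim from the repo.

// routes.ts (sketch): wires the POST endpoint to the createOne service
import express from 'express';
import createOne from './services/createOne'; // assumed path

const router = express.Router();

router.post('/product', async (req, res) => {
  try {
    const { name, category, description, price } = req.body;
    const created = await createOne({ name, category, description, price });
    res.status(201).json(created);
  } catch (error) {
    console.error(error);
    res.status(500).json({ message: 'Failed to create product' });
  }
});

export default router;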

Embed on update

To ensure our search works as expected on data updates, we must generate embeddings upon modification of product records. To update a product, we can make a PATCH request to “localhost:5000/product/:id”, where :id is the MongoDB document id. This will call the updateOne.ts service.
Let's make a PATCH request to update the name of the phone from “foo phone” to “Super Phone.”
// updateObj contains the extracted request body with updated data
const updateObj = {
  name: 'Super Phone',
};

const product = await collection.findOne({ _id });

const objToEmbed = {
  name: updateObj.name || product.name,
  category: updateObj.category || product.category,
  description: updateObj.description || product.description,
};

const embedding = await generateEmbedding(JSON.stringify(objToEmbed));

updateObj.embedding = embedding;

const updatedDoc = await collection.findOneAndUpdate(
  { _id },
  { $set: updateObj },
  {
    returnDocument: 'after',
    projection: { embedding: 0 },
  }
);
In the above code, the variable updateObj contains the PATCH request body data. Here, we are only updating the name. Then, we use findOne to get the existing product item. The objToEmbed object is constructed to determine which fields to embed in the document. It incorporates both the new values from updateObj and the existing values from the product document, ensuring that any unchanged fields are retained.
In simple terms, we are regenerating the embedding array from the updated data, using the same set of fields we used when the document was created. This is important to ensure that our search function works correctly and that the updated document stays relevant to its context.
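One detail worth noting: the _id used in the snippet above must be a BSON ObjectId, not the raw string from the URL. A minimal conversion sketch:

import { ObjectId } from 'mongodb';

// Convert the :id path parameter into a BSON ObjectId before querying
const _id = new ObjectId(req.params.id);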
screenshot of update request and response from Postman
screenshot from MongoDB Atlas of the updated document

Search with OpenAI

This section is the core part of the article. Here, we will look into prompt definition and output parsing. We will also look into how to use the MongoDB aggregation pipeline to filter non-embedded values.
To execute a search, we need to make a GET request to “localhost:5000/product” with a ?search query param.
http://localhost:5000/product?search=phones with good camera under 160 dollars
screenshot of product search request
The product search request calls the searchProducts service. Let’s look at the search function step by step.
const searchProducts = async (searchText: string): Promise<IProductDocument[]> => {
  try {
    const embedding = await generateEmbedding(searchText); // Generate Embedding
    const gptResponse = (await searchAssistant(searchText)) as IGptResponse;
    // ...
In the first line, we create an embedding using the same generateEmbedding function we used for create and update. Let’s park this for now and focus on the second function, searchAssistant.

Search assistant function

This is a reusable function responsible for calling the OpenAI completion model. You can find the searchAssistant file on GitHub. This is where we define the prompt for the generative model, including output instructions.
async function main(userMessage: string): Promise<any> {
  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: 'system',
        content: `You are an e-commerce search assistant. Follow the below list of instructions for generating the response.
        - You should only output JSON strictly following the Output Format Instructions.
        - List of Categories: Books, Clothing, Electronics, Home & Kitchen, Sports & Outdoors.
        - Identify whether user message matches any category from the List of Categories else it should be empty string. Do not invent category outside the provided list.
        - Identify price range from user message. minPrice and maxPrice must only be number or null.
        - Output Format Instructions for JSON: { category: 'Only one category', minPrice: 'Minimum price if applicable else null', maxPrice: 'Maximum Price if applicable else null' }
        `,
      },
      { role: 'user', content: userMessage },
    ],
    model: 'gpt-3.5-turbo-1106',
    response_format: { type: 'json_object' },
  });

  const outputJson = JSON.parse(completion.choices[0].message.content);

  return outputJson;
}

Prompt explanation

You can refer to the OpenAI Chat Completions docs to understand the function definition. Here, we will explain the system prompt. This is where we give some context to the model.
  • First, we tell the model about its role and instruct it to follow the set of rules we are about to define.
  • We explicitly instruct it to output only JSON following the “Output Format Instructions” we have provided within the prompt.
  • Next, we provide a list of categories to classify the user request. This is hardcoded here but in a real-time scenario, we might generate a category list from DB.
  • Next, we are instructing it to identify if users have mentioned any price so that we can use that in our aggregation query.
Let’s add some console logs before the return statement and test the function.
// ... Existing code
const outputJson = JSON.parse(completion.choices[0].message.content);
console.log({ userMessage });
console.log({ outputJson });
return outputJson;
With the console logs in place, make a GET request to /product with the search query param. Example:
// Request
http://localhost:5000/product?search=phones with good camera under 160 dollars

// Console logs from terminal
{ userMessage: 'phones with good camera under 160 dollars' }
{ outputJson: { category: 'Electronics', minPrice: null, maxPrice: 160 } }
From the OpenAI response above, we can see that the model has classified the user message under the “Electronics” category and identified the price range. It has followed our output instructions, as well, and returned the JSON we desired. Now, let’s use this output and structure our aggregation pipeline.
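One caveat before we do: JSON.parse can throw, and the model's fields are not guaranteed to match our expectations on every call. It is worth narrowing the parsed value before using it; below is a minimal sketch, with the IGptResponse shape inferred from its usage in searchProducts.

interface IGptResponse {
  category: string;
  minPrice: number | null;
  maxPrice: number | null;
}

// Coerce the parsed model output into IGptResponse, falling back to safe defaults
const toGptResponse = (value: any): IGptResponse => ({
  category: typeof value?.category === 'string' ? value.category : '',
  minPrice: typeof value?.minPrice === 'number' ? value.minPrice : null,
  maxPrice: typeof value?.maxPrice === 'number' ? value.maxPrice : null,
});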

Aggregation pipeline

In our searchProducts file, right after we get the gptResponse, we are calling a function called constructMatch. The purpose of this function is to construct the $match stage query object using the output we received from the GPT model — i.e., it will extract the category and min and max prices from the GPT response to generate the query.
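The exact implementation lives in the repo; as a rough sketch based on that description, constructMatch could look like this:

// constructMatch (sketch): builds a $match stage from the GPT response
const constructMatch = (gpt: IGptResponse) => {
  const query: Record<string, any> = {};
  if (gpt.category) query.category = gpt.category;
  if (gpt.minPrice !== null || gpt.maxPrice !== null) {
    query.price = {};
    if (gpt.minPrice !== null) query.price.$gte = gpt.minPrice;
    if (gpt.maxPrice !== null) query.price.$lte = gpt.maxPrice;
  }
  return { $match: query };
};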
Example
Let’s do a search that includes a price range: “?search=show me some good programming books between 100 to 150 dollars”.
console logs of GPT response and match query
From the above image, we can see that our GPT model was able to recognize the price range and that our match stage query reflects those values.
Once we have the match query, we will move forward with the aggregation pipeline.
const aggCursor = collection.aggregate<IProductDocument>([
  {
    $vectorSearch: {
      index: VECTOR_INDEX_NAME,
      path: 'embedding',
      queryVector: embedding,
      numCandidates: 150,
      limit: 10,
    },
  },
  matchStage,
  {
    $project: {
      _id: 1,
      name: 1,
      category: 1,
      description: 1,
      price: 1,
      score: { $meta: 'vectorSearchScore' },
    },
  },
]);
The first stage in our pipeline is the $vectorSearch stage.
  • index: the vector index name we provided when initially creating the index under the section Setting up vector index for collection (mynewvectorindex).
  • path: the field name in our document that holds the vector values; in our case, the field name is embedding.
  • queryVector: the embedded form of the search text. We generated the embedding for the user’s search text using the same generateEmbedding function, and its value is added here.
  • numCandidates: the number of nearest neighbors to use during the search. The value must be less than or equal to (<=) 10000, and it can't be less than the number of documents to return (limit).
  • limit: the number of docs to return in the result.
Please refer to the vector search fields docs for more information regarding these fields. You can adjust the numCandidates and limit based on requirements.
The second stage is the match stage, which simply contains the query object we generated using the constructMatch function, as explained previously.
The third stage is the $project stage, which deals only with what to show and how to show it. Here, you can omit the fields you don’t wish to return.
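Finally, the aggregate call returns a cursor, so the service consumes it into a plain array before responding; a one-line sketch:

// Materialize the aggregation cursor into an array for the API response
const results: IProductDocument[] = await aggCursor.toArray();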

Demonstration

Let’s see our search functionality in action. To do this, we will create a new product and make a search with related keywords. Later, we will update the same product and do a search with keywords matching the updated document.
We can create a new book using our POST request.
Book 01
1{"name": "JavaScript 101",
2 "category": "Books",
3 "description": "This is a good book for learning JavaScript for beginners. It covers fundamental concepts such as variables, data types, operators, control flow, functions, and more.",
4 "price": 60
5}
The below GIF shows how we can create a book from Postman and view the created book in MongoDB Atlas UI by filtering the category with Books.
GIF showing creation of book from postman and viewing the same in MongoDB Atlas
Let’s create two more books using the same POST request so that we have some data for testing.
Book 2
1{"name": "Go lang Essentials",
2 "category": "Books",
3 "description": "A comprehensive guide to learning the Go programming language for beginners. This book is perfect for anyone looking to dive into Go programming.",
4 "price": 70}
Book 3
1{"name": "Cracking the Coding Interview",
2 "category": "Books",
3 "description": "This book is a comprehensive guide to preparing for coding interviews, offering practice questions and solutions.",
4 "price": 80}
After inserting, we should have at least three documents under the category Books.
List of inserted books in MongoDB Atlas UI
Let’s search for JavaScript books using the search term, “I want to learn JavaScript.”
Search API call with search text I want to learn Javascript
Now, let’s search, “I’m preparing for coding interview.”
Search API call with search text I’m preparing for coding interview
As we can see from the two screenshots, our search algorithm is able to respond with books related to coding even though we haven’t explicitly mentioned that we are looking for books. Also, the books are ordered based on the search intent. Pay attention to the score field in our response data. When we searched “I want to learn JavaScript,” we got the JavaScript 101 book at the top, and when we searched “I'm preparing for coding interview,” the book Cracking the Coding Interview came out on top. Learn more about vector search scores.
If you're wondering why we see all the books in our response, this is due to our limited sample data of only three books. In real-world scenarios, if more relevant items are available in the DB, they will score higher for the search term and be prioritized.
Let’s update something in our books using our PATCH request. Here, we will update our JavaScript 101 book to a Python book using its document _id.
Patch request to update the JavaScript book to Python book with response
Now, our collection should look like the below under the Books category.
Book list in Atlas UI showing Javascript book has been renamed to python book
We can see that our JavaScript 101 book has been changed to a Python book. Now, let's search for the Python book using the search term “Python for beginners.”
Search API call with search text Python for beginners
From the screenshot above, we can see that our search works as expected. This is possible because we are embedding our data both on create and update.

Conclusion

In conclusion, it's important to note that this article presents a high-level design for creating a semantic search utilizing MongoDB Vector Search and OpenAI models. This could serve as a starting point for developers looking to build a similar semantic search solution. Make sure to check the Vector Search docs for more details. Thanks for reading.
