
Get Started with the Semantic Kernel C# Integration

On this page

  • Background
  • Prerequisites
  • Set Up the Environment
  • Store Custom Data in Atlas
  • Create the Atlas Vector Search Index
  • Run Vector Search Queries
  • Answer Questions on Your Data
  • Next Steps

Note

This tutorial uses the Semantic Kernel C# library. For a tutorial that uses the Python library, see Get Started with the Semantic Kernel Python Integration.

You can integrate Atlas Vector Search with Microsoft Semantic Kernel to build AI applications and implement retrieval-augmented generation (RAG). This tutorial demonstrates how to start using Atlas Vector Search with Semantic Kernel to perform semantic search on your data and build a RAG implementation. Specifically, you perform the following actions:

  1. Set up the environment.

  2. Store custom data on Atlas.

  3. Create an Atlas Vector Search index on your data.

  4. Run a semantic search query on your data.

  5. Implement RAG by using Atlas Vector Search to answer questions on your data.

Semantic Kernel is an open-source SDK that allows you to combine various AI services and plugins with your applications. You can use Semantic Kernel for a variety of AI use cases, including RAG.

By integrating Atlas Vector Search with Semantic Kernel, you can use Atlas as a vector database and use Atlas Vector Search to implement RAG by retrieving semantically similar documents from your data. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with Atlas Vector Search.

To complete this tutorial, you must have the following:

  • An Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later (including RCs). Ensure that your IP address is included in your Atlas project's access list.

  • An OpenAI API Key. You must have a paid OpenAI account with credits available for API requests.

  • A terminal and code editor to run your .NET application.

  • C#/.NET installed.

First, set up the environment for this tutorial by completing the following steps.

1. Run the following commands in your terminal to create a new directory named sk-mongodb and initialize your application:

mkdir sk-mongodb
cd sk-mongodb
dotnet new console

2. In your terminal, run the following commands to install the packages for this tutorial:

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.MongoDB --prerelease
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI
dotnet add package Microsoft.SemanticKernel.Memory
dotnet add package Microsoft.SemanticKernel.Plugins.Memory --prerelease

3. In your terminal, run the following commands to add your Atlas cluster's SRV connection string and OpenAI API key to your environment:

export OPENAI_API_KEY="<Your OpenAI API Key>"
export ATLAS_CONNECTION_STRING="<Your MongoDB Atlas SRV Connection String>"

Note

Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
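
If you want to confirm that your connection string and IP access-list entry work before writing any Semantic Kernel code, you can run a quick ping from a scratch C# program. This is a minimal sketch, not part of the tutorial's Program.cs; it assumes the MongoDB.Driver package, which the Microsoft.SemanticKernel.Connectors.MongoDB package pulls in as a dependency.

// Optional connectivity check (a sketch; not part of the tutorial code).
// Assumes the MongoDB.Driver package, a dependency of the Semantic Kernel
// MongoDB connector.
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient(Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING"));
var ping = client.GetDatabase("admin").RunCommand<BsonDocument>(new BsonDocument("ping", 1));
Console.WriteLine(ping); // an "ok" : 1 field indicates a successful connection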

In this section, you initialize the kernel, which is the main interface used to manage your application's services and plugins. Through the kernel, you configure your AI services, instantiate Atlas as a vector database (also called a memory store), and load custom data into your Atlas cluster.

Copy and paste the following code into your application's Program.cs file.

This code performs the following actions:

  • Imports Semantic Kernel and all the required packages.

  • Connects to your Atlas cluster by retrieving your SRV connection string from the environment.

  • Retrieves your OpenAI API key from the environment and creates an instance of OpenAI's text-embedding-ada-002 embedding model.

  • Initializes the kernel, then adds OpenAI's gpt-3.5-turbo model to the kernel as the chat completion service used to generate responses.

  • Instantiates Atlas as a memory store and specifies the following parameters:

    • semantic_kernel_db as the database to store the documents.

    • vector_index as the index to use for querying the memory store.

  • Initializes an instance of the SemanticTextMemory class, which provides a group of native methods to help you store and retrieve text in memory.

  • Populates the semantic_kernel_db.test collection with sample documents by calling the PopulateMemoryAsync method.

// Import Packages
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.MongoDB;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Memory;
using Microsoft.SemanticKernel.Plugins.Memory;

#pragma warning disable SKEXP0010, SKEXP0020, SKEXP0001, SKEXP0050

class Program {
    static async Task Main(string[] args) {
        // Get connection string and OpenAI API Key
        var connectionString = Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING");
        if (connectionString == null)
        {
            Console.WriteLine("You must set your 'ATLAS_CONNECTION_STRING' environment variable.");
            Environment.Exit(0);
        }
        var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
        if (openAIKey == null)
        {
            Console.WriteLine("You must set your 'OPENAI_API_KEY' environment variable.");
            Environment.Exit(0);
        }

        // Create new OpenAI API Embedding Model
        var embeddingGenerator = new OpenAITextEmbeddingGenerationService("text-embedding-ada-002", openAIKey);

        // Initialize Kernel
        IKernelBuilder builder = Kernel.CreateBuilder();

        // Add OpenAI Chat Completion to Kernel
        builder.AddOpenAIChatCompletion(
            modelId: "gpt-3.5-turbo",
            apiKey: openAIKey
        );
        Kernel kernel = builder.Build();

        // Instantiate Atlas as a memory store
        MongoDBMemoryStore memoryStore = new(connectionString, "semantic_kernel_db", indexName: "vector_index");
        SemanticTextMemory textMemory = new(memoryStore, embeddingGenerator);

        // Populate memory with sample data
        async Task PopulateMemoryAsync() {
            await textMemory.SaveInformationAsync(collection: "test", text: "I am a developer", id: "1");
            await textMemory.SaveInformationAsync(collection: "test", text: "I started using MongoDB two years ago", id: "2");
            await textMemory.SaveInformationAsync(collection: "test", text: "I'm using MongoDB Vector Search with Semantic Kernel to implement RAG", id: "3");
            await textMemory.SaveInformationAsync(collection: "test", text: "I like coffee", id: "4");
        }

        await PopulateMemoryAsync();
    }
}

Save the file, then run the following command to load your data into Atlas:

dotnet run

Tip

After running the sample code, you can view your vector embeddings in the Atlas UI by navigating to the semantic_kernel_db.test collection in your cluster.
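
If you'd rather check from code than from the Atlas UI, the following sketch reads one stored document back with the MongoDB C# driver and prints it as JSON, including the embedding array that you index in the next step. It assumes the MongoDB.Driver package that the MongoDB connector depends on and is not part of the tutorial's Program.cs.

// A sketch for inspecting one stored document programmatically.
// Assumes the MongoDB.Driver package (a dependency of the MongoDB connector).
using MongoDB.Bson;
using MongoDB.Driver;

var mongoClient = new MongoClient(Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING"));
var testCollection = mongoClient
    .GetDatabase("semantic_kernel_db")
    .GetCollection<BsonDocument>("test");

// Fetch and print the first document, including its embedding array
var sampleDoc = testCollection.Find(FilterDefinition<BsonDocument>.Empty).FirstOrDefault();
Console.WriteLine(sampleDoc?.ToJson());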

To enable vector search queries on your vector store, create an Atlas Vector Search index on the semantic_kernel_db.test collection.

To create an Atlas Vector Search index, you must have Project Data Access Admin or higher access to the Atlas project.

Run the following Python code to connect to your Atlas cluster and create an index of the vectorSearch type. (This step uses PyMongo, MongoDB's Python driver; you can also create the same index from the Atlas UI.) The index definition specifies indexing the following field:

  • embedding field as the vector type. The embedding field contains the embeddings created using OpenAI's text-embedding-ada-002 embedding model. The index definition specifies 1536 vector dimensions and measures similarity using cosine.

import os
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

# Connect to your Atlas cluster and specify the collection
ATLAS_CONNECTION_STRING = os.environ["ATLAS_CONNECTION_STRING"]
client = MongoClient(ATLAS_CONNECTION_STRING)
collection = client["semantic_kernel_db"]["test"]

# Create your index model, then create the search index
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1536,
                "similarity": "cosine"
            }
        ]
    },
    name="vector_index",
    type="vectorSearch"
)
collection.create_search_index(model=search_index_model)

The index should take about one minute to build. While it builds, the index is in an initial sync state. When it finishes building, you can start querying the data in your collection.
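
The build status is visible in the Atlas UI, but you can also poll it from code. The following is a minimal sketch that assumes a recent version of the MongoDB C# driver with Atlas Search index management helpers (the collection.SearchIndexes property); it waits until Atlas reports the index as queryable.

// A sketch that polls until vector_index is queryable.
// Assumes a recent MongoDB C# driver with search index management support.
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient(Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING"));
var collection = client.GetDatabase("semantic_kernel_db").GetCollection<BsonDocument>("test");

while (true)
{
    var indexes = collection.SearchIndexes.List().ToList();
    var index = indexes.FirstOrDefault(i => i["name"].AsString == "vector_index");
    if (index != null && index.GetValue("queryable", false).ToBoolean())
    {
        Console.WriteLine("vector_index is ready to query.");
        break;
    }
    Console.WriteLine("Waiting for the index to build...");
    await Task.Delay(TimeSpan.FromSeconds(5));
}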

Once Atlas builds your index, you can run vector search queries on your data.

Add the following code to the end of the Main method in your Program.cs file to perform a basic semantic search for the string What is my job title?. It prints the most relevant document and a relevance score between 0 and 1.

var results = textMemory.SearchAsync(collection: "test", query: "What is my job title?");

await foreach (var result in results) {
    Console.WriteLine($"Answer: {result?.Metadata.Text}, {result?.Relevance}");
}
Console.WriteLine("Search completed.");

Save the file, then run the following command to see the results of the semantic search:

dotnet run
Answer: I am a developer, 0.8913083076477051
Search completed.
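
By default, SearchAsync returns only the single best match. The Semantic Kernel memory abstraction also accepts limit and minRelevanceScore arguments; the following sketch shows how you might retrieve several candidates, with the caveat that exact parameter defaults can vary by package version.

// Return up to three matches with a relevance score of at least 0.5.
// The limit and minRelevanceScore parameters come from the Semantic Kernel
// memory abstraction; defaults can differ between package versions.
var topResults = textMemory.SearchAsync(
    collection: "test",
    query: "What is my job title?",
    limit: 3,
    minRelevanceScore: 0.5);

await foreach (var match in topResults)
{
    Console.WriteLine($"{match.Metadata.Text} (relevance: {match.Relevance})");
}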

This section shows an example RAG implementation with Atlas Vector Search and Semantic Kernel. Now that you've used Atlas Vector Search to retrieve semantically similar documents, paste the following code example at the end of the Main method in your Program.cs to prompt the LLM to answer questions based on those documents.

This code performs the following actions:

  • Imports the TextMemoryPlugin, which wraps your textMemory instance, into the kernel as a plugin.

  • Builds a prompt template that uses the recall function from the TextMemoryPlugin class to perform a semantic search over the kernel's textMemory for the string When did I start using MongoDB?.

  • Creates a function named settings from the prompt template by using the kernel's CreateFunctionFromPrompt function.

  • Calls the kernel's InvokeAsync function to generate a response from the chat model using the following parameters:

    • The settings function that configures the prompt template and OpenAIPromptExecutionSettings.

    • The question When did I start using MongoDB? as the value for the {{$input}} variable in the prompt template.

    • semantic_kernel_db.test as the collection to retrieve information from.

  • Prints the question and generated response.

kernel.ImportPluginFromObject(new TextMemoryPlugin(textMemory));

const string promptTemplate = @"
    Answer the following question based on the given context.
    Question: {{$input}}
    Context: {{recall 'When did I start using MongoDB?'}}
";

// Create and Invoke function from the prompt template
var settings = kernel.CreateFunctionFromPrompt(promptTemplate, new OpenAIPromptExecutionSettings());
var ragResults = await kernel.InvokeAsync(settings, new()
{
    [TextMemoryPlugin.InputParam] = "When did I start using MongoDB?",
    [TextMemoryPlugin.CollectionParam] = "test"
});

// Print RAG Search Results
Console.WriteLine("Question: When did I start using MongoDB?");
Console.WriteLine($"Answer: {ragResults.GetValue<string>()}");

Save the file, then run the following command to generate a response:

dotnet run
Question: When did I start using MongoDB?
Answer: You started using MongoDB two years ago.

Tip

You can add your own data and replace the following parts of the code to generate responses for a different question (an optional helper sketch follows this list):

  • {{recall '<question>'}}

  • [TextMemoryPlugin.InputParam] = "<question>"

  • Console.WriteLine("Question: <question>")
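
As a convenience, you could wrap those edits in a small helper so the question is supplied in one place. The following is a hypothetical sketch (AskAsync is not part of the tutorial code) that you could add to the end of the Main method; it reuses the kernel and TextMemoryPlugin already configured above.

// A hypothetical helper (not part of the tutorial) that builds the prompt
// for any question, so the question only needs to be supplied once.
async Task<string?> AskAsync(string question)
{
    var prompt = @$"
        Answer the following question based on the given context.
        Question: {{{{$input}}}}
        Context: {{{{recall '{question}'}}}}";

    var promptFunction = kernel.CreateFunctionFromPrompt(prompt, new OpenAIPromptExecutionSettings());
    var answer = await kernel.InvokeAsync(promptFunction, new()
    {
        [TextMemoryPlugin.InputParam] = question,
        [TextMemoryPlugin.CollectionParam] = "test"
    });
    return answer.GetValue<string>();
}

Console.WriteLine(await AskAsync("What do I like to drink?"));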
