

Claim management using LLMs and vector search for RAG

Discover how to combine Atlas Vector Search and large language models (LLMs) to streamline the claim adjustment process.
Solution overview
One of the biggest challenges for claim adjusters is pulling and aggregating information from disparate systems and diverse data formats. PDFs of policy guidelines might be stored in a content-sharing platform, customer information locked in a legacy CRM, and claim-related pictures and voice reports in yet another tool. All of this data is not only fragmented across siloed sources and hard to find, but also stored in formats that have historically been nearly impossible to index with traditional methods.

Over the years, insurance companies have accumulated terabytes of unstructured data in their datastores but have failed to capitalize on it to uncover business insights, deliver better customer experiences, and streamline operations. Some of our customers even admit they’re not fully aware of all the data sitting in their archives. There is now a tremendous opportunity to put this unstructured data to work for these organizations and their customers.

Our solution addresses these challenges by combining the power of Atlas Vector Search and an LLM in a retrieval-augmented generation (RAG) system, allowing organizations to go beyond the limitations of baseline foundation models and make them context-aware by feeding them proprietary data. In this way, insurers can leverage the full potential of AI to streamline operations.

Reference architectures

With MongoDB:

MongoDB Atlas combines transactional and search capabilities in the same platform, providing a unified development experience. Because embeddings are stored alongside the existing data, a vector search query returns the document containing both the vector embeddings and the associated metadata, eliminating the need to retrieve the data from elsewhere. This is a great advantage for developers, who don’t need to learn, use, and maintain a separate technology and can focus fully on building their apps.

Ultimately, the documents retrieved by Atlas Vector Search are fed to the LLM as context.

Figure: RAG querying flow
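
The querying flow above can be sketched in a few lines of Python with PyMongo. This is an illustrative sketch rather than the demo’s exact code: the database, collection, and index names, the embedding model, and the numeric parameters are all assumptions, and the field names follow the data model described below.

    import pymongo
    from openai import OpenAI

    mongo = pymongo.MongoClient("<ATLAS_CONNECTION_STRING>")
    collection = mongo["demo_rag_insurance"]["claims"]  # assumed namespace

    # Embed the user's question with the same model used for the claim embeddings.
    openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
    question = "Rear-end collision in a parking lot with disputed liability"
    query_vector = openai_client.embeddings.create(
        model="text-embedding-ada-002",
        input=question,
    ).data[0].embedding

    # $vectorSearch is an ordinary aggregation stage, so it can be chained
    # with other stages such as $project.
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",  # assumed index name
                "path": "claimDescriptionEmbedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": 5,
            }
        },
        {"$project": {"claimDescription": 1, "_id": 0}},
    ]

    context_docs = list(collection.aggregate(pipeline))
    # context_docs is what gets passed to the LLM as context.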
Data model approach

The “claim” collection contains documents with a number of fields related to each claim. In particular, we are interested in the “claimDescription” field, which we vectorize and add to the document as “claimDescriptionEmbedding.” This embedding is then indexed and used to retrieve the documents most relevant to the user prompt.

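To make the data model concrete, here is a simplified sketch of a document in the “claim” collection. The field values are invented for illustration, and the embedding is truncated; an OpenAI text-embedding-ada-002 embedding is a vector of 1,536 floats.

    {
      "_id": "...",
      "customerId": "CUST-1042",
      "claimStatus": "open",
      "claimDescription": "Rear-ended at a stoplight; minor damage to the bumper.",
      "claimDescriptionEmbedding": [0.0023, -0.0118, 0.0077, "..."]
    }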
Building the solution

The instructions to build the demo are included in the README of this GitHub repo. You’ll be guided through the following steps (a sample configuration for the first two is sketched below):

  • OpenAI API key setup
  • Atlas connection setup
  • Dataset download
  • LLM configuration options
  • Vector Search index creation
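
For the first two steps, the application needs an OpenAI API key and an Atlas connection string, typically provided as environment variables. The variable names below are illustrative assumptions; check the repository’s README for the exact names it expects.

    # .env (illustrative)
    OPENAI_API_KEY="sk-..."
    MONGODB_URI="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/"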

Visit the Atlas Vector Search Quick Start guide to try our semantic search tool now.

Step 4 of this tutorial walks you through the creation and configuration of the Vector Search index within the Atlas UI. Make sure you follow this structure:
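The repository shows the exact definition to use; as a point of reference, a typical Atlas Vector Search index definition for this data model looks like the following, assuming 1,536-dimensional OpenAI embeddings and cosine similarity:

    {
      "fields": [
        {
          "type": "vector",
          "path": "claimDescriptionEmbedding",
          "numDimensions": 1536,
          "similarity": "cosine"
        }
      ]
    }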

Finally, you have to run both the front end and the back end. You’ll then access a web UI that lets you ask questions of the LLM, get an answer, and see the reference documents used as context.

Key learnings
  • Text embedding creation — The embedding generation process can be carried out using different models and deployment options. It is always important to be mindful of privacy and data protection requirements. A locally deployed model is recommended if your data must never leave your servers; otherwise, you can simply call an API and get your vectors back, as explained in this tutorial on generating embeddings with OpenAI.

  • Creation of a Vector Search index in Atlas — It is now possible to create indexes for local deployments.

  • Performing a Vector Search query — Notably, vector search queries have a dedicated aggregation stage, $vectorSearch, within MongoDB’s aggregation pipeline. This means they can be chained with other stages, making it extremely convenient for developers because they don’t need to learn a different language or change context.

  • Using LangChain as the framework that glues together MongoDB Atlas Vector Search and the LLM, allowing for an easy and fast RAG implementation; a minimal sketch of this glue code follows below.
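
Below is a minimal sketch of that glue code, using the langchain-openai and langchain-mongodb integration packages. The connection string, namespace, index name, and model choices are assumptions for illustration; the demo repository may wire things up differently.

    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_mongodb import MongoDBAtlasVectorSearch
    from langchain.chains import RetrievalQA

    # Point LangChain at the existing collection and Vector Search index.
    vector_store = MongoDBAtlasVectorSearch.from_connection_string(
        "<ATLAS_CONNECTION_STRING>",
        "demo_rag_insurance.claims",  # assumed "database.collection" namespace
        OpenAIEmbeddings(model="text-embedding-ada-002"),
        index_name="vector_index",  # assumed index name
        text_key="claimDescription",
        embedding_key="claimDescriptionEmbedding",
    )

    # Retrieve the top matches and "stuff" them into the prompt as context.
    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(model="gpt-4o"),
        chain_type="stuff",
        retriever=vector_store.as_retriever(search_kwargs={"k": 5}),
        return_source_documents=True,  # surfaces the reference documents
    )

    result = qa.invoke({"query": "Summarize the claims involving rear-end collisions"})
    print(result["result"])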

Authors
  • Luca Napoli, Industry Solutions, MongoDB
  • Jeff Needham, Industry Solutions, MongoDB
Related resources

GitHub Repository: RAG-Insurance

Implement this demo by following the instructions and associated models in this solution’s repository.


Resources to Build AI-powered Apps

Get full access to our library of articles, analyst reports, case studies, white papers, and more.


AI, Vectors, and the Future of Claims Processing

Discover why insurers need to understand the power of vector databases.


MongoDB for Insurance

Modernize, move to any cloud, and embrace the AI-driven future of insurance.
