
Advancing Encryption in MongoDB Atlas

Maintaining a strong security posture and ensuring compliance with regulations and industry standards are core responsibilities of enterprise security teams. However, satisfying these responsibilities is becoming increasingly complex, time-consuming, and high-stakes. The rapid evolution of the threat landscape is a key driver of this challenge. In 2024, the percentage of organizations that experienced a data breach costing $1 million or more jumped from 27% to 36%.[1] This was partly fueled by a 180% surge from 2023 to 2024 in vulnerability exploitation by attackers.[2] Concurrently, regulations are tightening. Laws like the Health Insurance Portability and Accountability Act (HIPAA)[3] and the U.S. Securities and Exchange Commission's cybersecurity regulations[4] have introduced stricter security requirements, raising the bar for compliance.

Thousands of enterprises rely on MongoDB Atlas to protect their sensitive data and support compliance efforts. Encryption plays a crucial role at three levels: securing data at rest, in transit, and in use. However, security teams need more than strong encryption alone. Flexibility and control are essential to align with an organization's specific requirements. MongoDB is introducing significant upgrades to MongoDB Atlas encryption to meet these needs, including enhanced customer-managed key (CMK) functionality and support for TLS 1.3. This post explores these improvements, along with the planned deprecation of outdated TLS versions, to strengthen organizations' security postures.

Why customer-managed keys (CMKs) matter

Customer-managed keys (CMKs) are a security and data governance feature that gives enterprises full control over the encryption keys protecting their data. With CMKs, customers can define and manage their encryption strategy, ensuring they have ultimate authority over access to their sensitive information. MongoDB Atlas customer key management provides file-level encryption, similar to transparent data encryption (TDE) in other databases. This customer-managed encryption-at-rest feature works alongside always-on volume-level encryption[5] in MongoDB Atlas, and CMKs ensure all database files and backups are encrypted. MongoDB Atlas also integrates with AWS Key Management Service (AWS KMS), Azure Key Vault, and Google Cloud KMS, giving customers the flexibility to manage keys as part of their broader enterprise security strategy.

Customers using CMKs retain complete control of their encryption keys. If an organization needs to revoke access to data due to a security concern or any other reason, it can do so immediately by freezing or destroying the encryption keys. This capability acts as a "kill switch," ensuring sensitive information becomes inaccessible when protection is critical. Similarly, an organization can destroy the keys to render the data and backups permanently unreadable and irretrievable, for example when retiring a cluster permanently.

Announcing CMK over private networking

As part of a commitment to deliver secure and flexible solutions for enterprise customers, MongoDB is introducing CMKs over private networking. This enhancement enables organizations to manage their encryption keys without exposing their key management service (KMS) to the public internet. Previously, using CMKs in MongoDB Atlas required Azure Key Vault and AWS KMS to be accessible via public IP addresses.
While functional, this posed challenges for customers who need to keep KMS traffic private, forcing them to either expose their KMS endpoints or manage IP allow lists. By using private networking, customers can now:

- Eliminate the need for public IP exposure.
- Simplify network management by removing the need to maintain allowed IP addresses, reducing administrative effort and misconfiguration risk.
- Align with organizational requirements that mandate the use of private networking.

Customer key management over private networking is now available for Azure Key Vault and AWS KMS. Customers can enable and manage this feature for all their MongoDB Atlas projects through the MongoDB Atlas UI or the MongoDB Atlas Administration API. More enhancements are coming for MongoDB customer key management in 2025, including secretless authentication mechanisms and CMKs for search nodes.

MongoDB Atlas TLS enhancements advance encryption in transit

Securing data in transit is just as vital as encrypting data at rest with CMKs. To address this, MongoDB Atlas enforces TLS by default, ensuring encrypted communication across all aspects of the platform, including client connections. Now MongoDB is reinforcing its TLS implementation with key enhancements for enterprise-grade security.

MongoDB is in the process of rolling out fleetwide support for TLS 1.3 in MongoDB Atlas. The latest version of the cryptographic protocol offers several advantages over its predecessors, including stronger security defaults, faster handshakes, and reduced latency. Concurrently, TLS versions 1.0 and 1.1 are being deprecated because of known weaknesses and their inability to meet modern security standards. By standardizing on TLS 1.2 and 1.3, MongoDB is aligning with industry best practices and ensuring a secure communication environment for all MongoDB Atlas users. Additionally, MongoDB now offers custom cipher suite selection, giving enterprises more control over their cryptographic configurations. This feature lets organizations choose the cipher suites for their TLS connections, ensuring compliance with their security requirements.

Achieving encryption everywhere

This post covers how MongoDB secures data at rest with CMKs and in transit with TLS. But what about data in use, while it's being processed in a MongoDB Atlas instance? That's where Queryable Encryption comes in. This groundbreaking feature enables customers to run expressive queries on encrypted data without ever exposing the plaintext or keys outside the client application. Sensitive data and queries never leave the client unencrypted, ensuring sensitive information is protected and inaccessible to anyone without the keys, including database administrators and MongoDB itself.

MongoDB is committed to providing enterprise-grade security that evolves with the changing threat and regulatory landscapes. With enhanced CMK functionality, TLS 1.3 adoption, and custom cipher suite selection, organizations now have greater control, flexibility, and protection across every stage of the data lifecycle. As security challenges grow more complex, MongoDB continues to innovate to enable enterprises to safeguard their most sensitive data. To learn more about these encryption enhancements and how they can strengthen your security posture, visit MongoDB Data Encryption.

[1] PwC, October 2024
[2] Verizon Data Breach Investigations Report, 2024
[3] U.S. Department of Health and Human Services, December 2024
[4] U.S. Securities and Exchange Commission, 2023
[5] MongoDB Atlas Security White Paper, "Encryption at Rest" section, page 12
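Relating back to the TLS discussion above: as a quick client-side check of the TLS 1.3 rollout, the sketch below opens a raw TLS connection to a cluster host and prints the negotiated protocol version. The hostname is a placeholder, and this is a diagnostic sketch rather than driver configuration; MongoDB drivers handle TLS negotiation as part of establishing a connection.

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class TlsVersionCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder hostname; replace with a node from your own Atlas cluster.
            String host = "cluster0-shard-00-00.example.mongodb.net";
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 27017)) {
                // Offer only TLS 1.3; the handshake fails if the server cannot negotiate it.
                socket.setEnabledProtocols(new String[] {"TLSv1.3"});
                socket.startHandshake();
                System.out.println("Negotiated protocol: " + socket.getSession().getProtocol());
            }
        }
    }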

March 5, 2025

AI-Powered Java Applications With MongoDB and LangChain4j

MongoDB is pleased to introduce its integration with LangChain4j, a popular framework for integrating large language models (LLMs) into Java applications. This collaboration simplifies the integration of MongoDB Atlas Vector Search into Java applications for building AI applications.

The advent of generative AI has opened up many new possibilities for developing novel applications. These advancements have led to the development of AI frameworks that simplify the complexities of orchestrating and integrating LLMs and the various components of the AI stack, where MongoDB plays a key role as an operational and vector database.

Simplifying AI development for Java

The first AI frameworks to emerge were developed for Python and JavaScript, which were favored by early AI developers. However, Java remains widespread in enterprise software, which led to the development of LangChain4j to address the needs of the Java ecosystem. While largely inspired by LangChain and other popular AI frameworks, LangChain4j is independently developed. As with other LLM frameworks, LangChain4j offers several advantages for developing AI systems and applications by providing:

- A unified API for integrating LLM providers and vector stores. This enables developers to adopt a modular approach with an interchangeable stack while ensuring a consistent developer experience.
- Common abstractions for LLM-powered applications, such as prompt templating, chat memory management, and function calling, offering ready-to-use building blocks for common AI applications like retrieval-augmented generation (RAG) and agents.

Powering RAG and agentic systems with MongoDB and LangChain4j

MongoDB worked with the LangChain4j open-source community to integrate MongoDB Atlas Vector Search into the framework, enabling Java developers to build AI-powered applications ranging from simple RAG to agentic applications. In practice, this means developers can now use the unified LangChain4j API to store vector embeddings in MongoDB Atlas and use Atlas Vector Search capabilities to retrieve relevant context data. These capabilities are essential for RAG pipelines, where private, often enterprise, data is retrieved based on relevance and combined with the original prompt to produce more accurate results in LLM-based applications.

LangChain4j supports various levels of RAG, from basic to advanced implementations, making it easy to prototype and experiment before customizing and scaling your solution to your needs. A basic RAG setup with LangChain4j typically involves loading and parsing unstructured data from documents stored locally or on remote services like Amazon S3 or Azure Storage using the Document API. The data is then transformed, split, and embedded to capture the semantic meaning of the content. For more details, check out the documentation on core RAG APIs.

However, real-world use cases often demand advanced RAG and agentic systems. LangChain4j optimizes RAG pipelines with predefined components designed to enhance accuracy, latency, and overall efficiency through techniques like query transformation, routing, content aggregation, and reranking. It also supports AI agent implementation through dedicated APIs, such as AI Services and Tools, with function calling and RAG integration, among others. Learn more about the MongoDB Atlas Vector Search integration in LangChain4j's documentation.

MongoDB's dedication to providing the best developer experience for building AI applications across different ecosystems remains strong, and this integration reinforces that commitment. We will continue strengthening our integrations with LLM frameworks, enabling developers to build more innovative AI applications, agentic systems, and AI agents. Ready to start building AI applications with Java? Learn how to create your first RAG system by visiting our tutorial: How to Make a RAG Application With LangChain4j.
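In the meantime, here is a minimal sketch of the shape this integration takes in code: storing embedded text segments in MongoDB Atlas through LangChain4j and retrieving the most relevant ones. It assumes the langchain4j-mongodb-atlas and langchain4j-open-ai modules, a pre-created Atlas Vector Search index, and placeholder connection details; class, builder, and retrieval method names follow the integration's documented pattern but vary between LangChain4j versions, so treat this as an outline rather than a drop-in implementation.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import dev.langchain4j.data.embedding.Embedding;
    import dev.langchain4j.data.segment.TextSegment;
    import dev.langchain4j.model.embedding.EmbeddingModel;
    import dev.langchain4j.model.openai.OpenAiEmbeddingModel;
    import dev.langchain4j.store.embedding.EmbeddingMatch;
    import dev.langchain4j.store.embedding.EmbeddingStore;
    import dev.langchain4j.store.embedding.mongodb.MongoDbEmbeddingStore;

    public class AtlasVectorSearchSketch {
        public static void main(String[] args) {
            // Placeholder connection string, database, collection, and index names.
            MongoClient mongoClient = MongoClients.create("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net");

            // Any LangChain4j EmbeddingModel can be plugged in; OpenAI is used here only as an example provider.
            EmbeddingModel embeddingModel = OpenAiEmbeddingModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))
                    .modelName("text-embedding-3-small")
                    .build();

            // Assumed builder methods for the MongoDB Atlas embedding store; names may differ by version.
            EmbeddingStore<TextSegment> store = MongoDbEmbeddingStore.builder()
                    .fromClient(mongoClient)
                    .databaseName("rag_demo")
                    .collectionName("chunks")
                    .indexName("vector_index")
                    .build();

            // Embed a document chunk and store it alongside its text.
            TextSegment segment = TextSegment.from("MongoDB Atlas Vector Search stores and queries embeddings.");
            Embedding embedding = embeddingModel.embed(segment).content();
            store.add(embedding, segment);

            // Retrieve the most similar chunks for a question; retrieval API naming varies across versions.
            Embedding query = embeddingModel.embed("How does Atlas handle vector search?").content();
            for (EmbeddingMatch<TextSegment> match : store.findRelevant(query, 3)) {
                System.out.println(match.score() + " -> " + match.embedded().text());
            }
        }
    }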

March 4, 2025

MongoDB 8.0: Eating Our Own Dog Food

Key Takeaways

- We achieve real-world testing by adopting release candidates (RCs) on our internal production systems before finalizing a release.
- Our diverse internal workloads delivered unique insights. For instance, an internal cluster's upgrade identified a rare MongoDB server crash and an inefficiency for a specific query shape introduced by a new MongoDB 8.0 feature.
- Issues encountered while testing MongoDB 8.0 internally were fixed proactively before they went out to customers. For example, during an upgrade to an 8.0 RC, one of our internal databases crashed, and the issue was fixed in the next RC.
- Prerelease testing uncovered gaps in our automated testing, leading to improved coverage with additional tests.
- Using MongoDB 8.0 internally on mission-critical internal systems demonstrated its reliability. This gave customers confidence that the release could handle their demanding workloads, just as it did for our own engineering teams.

Release jitters

Every software release, whether it's a new product or an update of an existing one, comes with an inherent risk: what if users encounter a bug that the development team didn't anticipate? With a mission-critical product like MongoDB 8.0, even minor issues can have a significant impact on customer operations, uptime, and business continuity. Unfortunately, no amount of automated testing can guarantee how MongoDB will perform when it lands with customers. So how does MongoDB proactively identify and resolve issues in our software before customers encounter them, thereby ensuring a seamless upgrade experience and maintaining customer trust?

Catching issues before you do

To address these challenges, we employ a combination of methods to ensure reliability. One approach is to formally model our system to prove the design is correct, such as the effort we undertook to mathematically model our protocols with lightweight formal methods like TLA+. Another method is to prove reliability empirically by dogfooding.

Dogfooding (🤨)?

Eating your own dog food—aka eating your own pizza, aka "dogfooding"—refers to a development process where you put yourself in customers' shoes by using your own product in your own production systems. In short: you're your own customer.

Why dogfood?

- Enhanced product quality: Testing in a controlled environment can't replicate the edge cases of true-to-life workloads, so real-world scenarios are needed to ensure robustness, reliability, and performance under diverse conditions.
- Early identification of issues: Testing internally surfaces issues earlier in the release process, enabling fixes to be deployed proactively before customers encounter them.
- Build customer empathy: Acting as users provides direct insight into customer pain points and needs. Engineers gain firsthand understanding of the challenges of using their product, informing more customer-centric solutions. Without dogfooding, things like upgrades are taken for granted and customer pain points can be overlooked.
- Boost credibility and trust: Relying on our own software to power critical internal systems reassures customers of its dependability.

Dogfooding at MongoDB

MongoDB has a strong dogfooding culture. Many internal services are built with MongoDB and hosted on MongoDB Atlas, the very same setup we provide our customers. Eating our own dog food is essential to our customer mindset. Because internal teams work alongside MongoDB engineers, acting as users bridges the gap between MongoDB engineers and their customers.
Additionally, real-life workloads vet our software and processes in a way automated testing cannot.

Release dogfooding

With the release of MongoDB 8.0, the company decided to take dogfooding one step further. Driven by a company-wide focus on making 8.0 the most performant version of MongoDB yet, we embarked on an ambitious plan to dogfood the release candidates within our own infrastructure. Before, our release process looked like this:

Figure 1. Releases without real-world testing.

We wanted it to look more like this:

Figure 2. Releases pregamed on internal clusters.

Adding internal testing to the release process allows us to iterate long before we make the product available to customers. Whereas in the past we'd release and fix issues reactively as customers encountered them, using the release internally, before it got into customers' hands, would uncover edge cases so we could fix them proactively. By acting as our own customers, we remove our real customers from the development cycle and build confidence in the release.

The confidence team

To tackle upgrades effectively, we assembled a cross-functional team of MongoDB engineers, Atlas SREs, and internal service developers. A technical program manager (TPM) was assigned to the effort to track progress and coordinate work across the team. Together, we enumerated the databases, scheduled upgrade dates, and assigned directly responsible individuals (DRIs) to each upgrade. To streamline communication, we created an internal Slack channel and invited everyone on the team to it. We agreed on a playbook: with the support of the team, the assigned DRI would upgrade their cluster and monitor for any issues. If something came up, we would create a ticket in an internal Jira project and mention it in Slack for visibility. I took on the role of DRI for Evergreen database upgrades.

Evergreen

My team maintains the database clusters for Evergreen, MongoDB's bespoke continuous integration (CI) system. Evergreen is responsible for running automated tests at scale against MongoDB, Atlas, the drivers, Evergreen itself, and many other products. At last count, Evergreen executes roughly ten years' worth of tests per day, running in parallel, and it is on the critical path for many teams at the company. Evergreen runs on two separate clusters in Atlas: the application's main replica set and a smaller one for our background job coordinator, Amboy. In terms of scale, the main replica set contains around 9.5TB of data and handles 1 billion CRUD operations per day, while the Amboy cluster contains about 1TB of data and handles 100 million CRUD operations per day. Because of Evergreen's criticality to the development cycle, historically we've taken a cautious approach to any operational changes, and database upgrades were not a priority. The initiative to dogfood our internal clusters changed our approach—we were going to use 8.0 before it went out to customers. Enabling a feature flag in Atlas made the RC build available in our Atlas project before it was available to customers.

A showstopper

Our first target was the Amboy cluster, which handles background jobs for Evergreen. I clicked the button to upgrade our Amboy cluster and we held our collective breath. Atlas upgrades are rolling, meaning an upgrade is applied iteratively to each secondary in the cluster until finally the primary is stepped down and upgraded. Usually this works well, since any issues will at most affect just a secondary, but in our case it didn't work out.
The secondaries' upgrades succeeded, but when the primary was stepped down, each node that won the election to be the next primary crashed. The result was that our cluster had no primary and the Amboy database was unavailable, which threw a monkey wrench into our application. We sounded the alarm and an investigation commenced ASAP. Stack traces, logs, and diagnostics were captured and the cluster was downgraded to 7.0.

As it turned out, we'd hit an edge case triggered by a malformed TTL index specification with a combination of two irregularities:

- Its expireAfterSeconds was not an integer.
- It contained a weights field, which is not valid in an index that's not a text index.

Both irregularities were previously allowed but became invalid due to strengthened validation checks. When a node steps up to primary, it corrects these malformed index specifications, but in that 8.0 RC, if there were two things wrong with an index, it would go down an execution path that ended in a segfault. This bug only occurs when a node steps up to primary, which is why it brought down our cluster despite the rolling upgrade. SERVER-94487 was opened to fix the bug, and the fix was rolled into the next RC. When the RC was ready, we upgraded the Amboy database again and the upgrade succeeded.

Not a showstopper

Next up was the main database cluster for the Evergreen application. We performed the upgrade, and at first all indications were that the upgrade was a success. However, on further inspection, a discontinuous jump had appeared in two of the Atlas monitoring graphs. Before the upgrade, our Query Executor graph usually looked like this:

Figure 3. Query Executor graph before the upgrade.

Whereas after the upgrade it looked like this:

Figure 4. Query Executor graph after the upgrade.

This represented roughly a 5x increase in the rate per second of index keys and documents scanned by queries and query plans. Similarly, the Query Targeting graph looked like this before the upgrade:

Figure 5. Query Targeting graph before the upgrade.

Whereas after the upgrade it looked like this:

Figure 6. Query Targeting graph after the upgrade.

This also represented roughly a 5x increase in the ratio of scanned index keys and documents to the number of documents returned. Both graphs indicated there was at least one query that wasn't using indexes as well as it had been before the upgrade. We got eyes on the cluster, and it was determined that a bug in index pruning (a new feature introduced in 8.0) was causing the query planner to remove the most efficient index for a contained $or query shape. This is when a query contains an $or branch that isn't the root of the query predicate, such as A and (C or B). For the 8.0 release this was listed as a known issue and disabled in Atlas, and index pruning was disabled entirely as of the 8.0.1 release until we can fix the underlying issue in SERVER-94741.

Other clusters

Other teams' clusters followed suit, and their upgrades went off without a hitch. It's to be expected that the particulars of each dataset and workload would trigger various edge cases; Evergreen's clusters hit some while the rest did not. This brings out an important lesson: testing against a varied set of live workloads raises the likelihood that we'll encounter and address the issues our customers would otherwise have encountered.

Continuous improvement

Although we caught these issues before they reached customers, our shift-left mindset motivates us to catch them earlier in the process through automated testing.
As part of this effort, we plan to add additional tests focused on upgrades from older versions of the database. The index pruning issue, in particular, was part of the inspiration for us to investigate property-based testing, an approach that has already uncovered several new bugs (SERVER-89308). SERVER-92232 will introduce a property-based test specifically for index pruning.

What's next?

All told, the exercise was a success. The 8.0 upgrade reduced Evergreen's operation execution times by an order of magnitude:

Figure 7. Drastically faster database operations after the upgrade.

For customers, dogfooding uncovered novel issues and gave us the chance to fix them before they could disrupt customer workloads. By the time we cut the release, we were confident we were providing our customers a seamless upgrade. Through the dogfooding process we discovered additional internal teams with services built on MongoDB. And now we're leaning further into dogfooding by building out a formal framework that will include those teams and their clusters. For the next release, this will uncover even more insights and provide greater confidence.

Looking ahead, as our CTO aptly put it, "all customers demand security, durability, availability, and performance" from their technology. Our commitment to eating our own dog food directly strengthens these very pillars. It's a commitment to our customers, a commitment to innovation, and a commitment to making MongoDB the best database in the world.

Join our MongoDB Community to learn about upcoming events, hear stories from MongoDB users, and connect with community members from around the world.
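As a footnote to the index-pruning issue described above, here is a hedged sketch of what a "contained $or" predicate looks like when built with the MongoDB Java driver. The database, collection, and field names are hypothetical and not taken from Evergreen's actual schema; the point is only the query shape A and (B or C), where the $or sits under the top-level $and rather than at the root of the predicate.

    import static com.mongodb.client.model.Filters.and;
    import static com.mongodb.client.model.Filters.eq;
    import static com.mongodb.client.model.Filters.or;

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import org.bson.conversions.Bson;

    public class ContainedOrExample {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")) {
                MongoCollection<Document> tasks = client.getDatabase("ci_demo").getCollection("tasks");

                // A and (B or C): the $or branch is nested under the top-level $and,
                // which is the "contained $or" shape affected by the index-pruning bug.
                Bson containedOr = and(
                        eq("project", "mongodb"),                  // A
                        or(eq("status", "failed"),                 // B
                           eq("status", "timed_out")));            // C

                tasks.find(containedOr).limit(5).forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }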

March 3, 2025

Secure by Default: Mandatory MFA in MongoDB Atlas

On March 26, 2025, MongoDB will start rolling out mandatory multi-factor authentication (MFA) for MongoDB Atlas users. While MFA has long been supported in Atlas, it was previously optional. MongoDB is committed to delivering customers the highest level of security, and the introduction of mandatory MFA adds an extra layer of protection against unauthorized access to MongoDB Atlas.

Note: MFA will require users to provide a second form of authentication, such as a one-time passcode or biometrics. To ensure a smooth transition, users are encouraged to set up their preferred MFA method in advance; setup should take around three minutes. If MFA is not configured by March 26, 2025, users will need to enter a one-time password (OTP) sent to their registered email each time they log in.

Why are we making MFA mandatory?

Stealing users' credentials is a key tactic in the modern cyberattack playbook. According to a Verizon report, stolen credentials have been involved in 31% of data breaches in the past decade, and credential stuffing is the most common attack type for web applications.[1] Credential stuffing is when attackers use stolen credentials obtained from a data breach on one service to attempt to log in to another service. These breaches are particularly harmful, taking an average of 292 days to detect and contain.[2] This rise in cyber threats has rendered password-only security inadequate.

Organizations of all sizes, from global enterprises to individual developers, trust MongoDB Atlas to safeguard their mission-critical applications and sensitive data. Therefore, to strengthen account security and reduce the risk of unauthorized access, MongoDB is introducing mandatory MFA.

The impact of MFA

A large-scale study by Microsoft measured the effectiveness of MFA at preventing cyberattacks on enterprise accounts. The findings indicated that enabling MFA reduces the risk of account compromise by 99.22%. For accounts with previously leaked credentials, MFA still lowered the risk by 98.56%. This makes MFA one of the most effective defenses against unauthorized access.

Requiring MFA by default strengthens the security of all MongoDB Atlas accounts. By reducing the risk of compromised accounts being used in broader attacks, this proactive step protects individual users and enhances MongoDB Atlas's overall security. Ensuring strong authentication practices across the Atlas ecosystem maintains the integrity of mission-critical applications and sensitive data, and the result is a safer experience for everyone.

Preparing for mandatory MFA

MFA will be a prerequisite for all users when logging into MongoDB services using Atlas credentials. These services include:

- MongoDB Atlas user interface
- MongoDB Support portal
- MongoDB University
- MongoDB Forums

Atlas supports the following MFA methods:

- Security key or biometrics: FIDO2 (WebAuthn) compliant security keys (e.g., YubiKey) or biometric authentication (e.g., Apple Touch ID or Windows Hello)
- One-time password (OTP) and push notifications: Provided through the Okta Verify app
- Authenticator apps: Such as Twilio Authy, Google Authenticator, or Microsoft Authenticator for generating time-based OTPs
- Email: For generating OTPs

MongoDB encourages users to choose phishing-resistant MFA methods, such as security keys or biometrics.

Strengthening security with mandatory MFA

Requiring MFA is a significant step that enhances MongoDB Atlas's default security. Multi-factor authentication protects users from credential-based attacks and unauthorized access, and making MFA's additional layer of authentication mandatory ensures greater account security, safeguarding mission-critical applications and data. To ensure a smooth transition, users are encouraged to set up their preferred MFA method before March 26, 2025. For detailed setup instructions, refer to the MongoDB documentation. And please visit the MongoDB security webpage and Trust Center to learn more about MongoDB's commitment to security.

February 28, 2025

Why Vector Quantization Matters for AI Workloads

Key takeaways

- As vector embeddings scale into the millions, memory usage and query latency surge, leading to inflated costs and poor user experience.
- By storing embeddings in reduced-precision formats (int8 or binary), you can dramatically cut memory requirements and speed up retrieval.
- Voyage AI's quantization-aware embedding models are specifically tuned to handle compressed vectors without significant loss of accuracy.
- MongoDB Atlas streamlines the workflow by handling the creation, storage, and indexing of compressed vectors, enabling easier scaling and management.
- MongoDB is built for change, allowing users to effortlessly scale AI workloads as resource demands evolve.

Organizations are now scaling AI applications from proofs of concept to production systems serving millions of users. This shift creates scalability, latency, and resource challenges for mission-critical applications leveraging recommendation engines, semantic search, and retrieval-augmented generation (RAG) systems. At scale, minor inefficiencies compound and become major bottlenecks, increasing latency, memory usage, and infrastructure costs. This guide explains how vector quantization enables high-performance, cost-effective AI applications at scale.

The challenge: Scaling vector search in production

Let's start by considering a modern voice assistant platform that combines semantic search with natural language understanding. During development, the system only needs to process a few hundred queries per day, converting speech to text and matching the resulting embeddings against a modest database of responses. The initial implementation is straightforward: each query generates a 32-bit floating-point embedding vector that's matched against a database of similar vectors using cosine similarity. This approach works smoothly in the prototype phase—response times are quick, memory usage is manageable, and the development team can focus on improving accuracy and adding features.

However, as the platform gains traction and scales to processing thousands of queries per second against millions of document embeddings, the simple approach begins to break down. Each incoming query now requires loading massive amounts of high-precision floating-point vectors into memory, computing similarity scores across an exponentially larger dataset, and maintaining increasingly complex vector indexes for efficient retrieval. Without proper optimization, the system struggles as memory usage balloons, query latency increases, and infrastructure costs spiral upward. What started as a responsive, efficient prototype has become a bottlenecked production system that struggles to maintain its performance requirements while serving a growing user base.

The key challenges are:

- Loading high-precision 32-bit floating-point vectors into memory
- Computing similarity scores across massive embedding collections
- Maintaining large vector indexes for efficient retrieval

These challenges can lead to critical issues like:

- High memory usage as vector databases struggle to keep float32 embeddings in RAM
- Increased latency as systems process large volumes of high-precision data
- Growing infrastructure costs as organizations scale their vector operations
- Reduced query throughput due to computational overhead

AI workloads with tens or hundreds of millions of high-dimensional vectors (e.g., 80M+ documents at 1536 dimensions) face soaring RAM and CPU requirements. Storing float32 embeddings for these workloads can become prohibitively expensive.
Vector quantization: A path to efficient scaling

The obvious question is: How can you maintain the accuracy of your recommendations, semantic matches, and search queries while drastically cutting down on compute and memory usage and reducing retrieval latency? Vector quantization is how. It helps you store embeddings more compactly, reduce retrieval times, and keep costs under control. Vector quantization offers a powerful solution to scalability, latency, and resource utilization challenges by compressing high-dimensional embeddings into compact representations while preserving their essential characteristics. This technique can dramatically reduce memory requirements and accelerate similarity computations without compromising retrieval accuracy.

What is vector quantization?

Vector quantization is a compression technique widely applied in digital signal processing and machine learning. Its core idea is to represent numerical data using fewer bits, reducing storage requirements without entirely sacrificing the data's informative value. In the context of AI workloads, quantization commonly involves converting embeddings—originally stored as 32-bit floating-point values—into formats like 8-bit integers. By doing so, you can substantially decrease memory and storage consumption while maintaining a level of precision suitable for similarity search tasks.

An important point to note is that quantization is especially suitable for use cases involving over 1 million vector embeddings, such as RAG applications, semantic search, or recommendation systems that require tight control of operational costs without compromising retrieval accuracy. Smaller datasets with fewer than 1 million embeddings might not see significant gains from quantization; for these, the overhead of implementing quantization might outweigh its benefits.

Understanding vector quantization

Vector quantization operates by mapping high-dimensional vectors to a discrete set of prototype vectors or converting them to lower-precision formats. There are three main approaches:

- Scalar quantization: Converts individual 32-bit floating-point values to 8-bit integers, reducing memory usage of vector values by 75% while maintaining reasonable precision.
- Product quantization: Compresses entire vectors at once by mapping them to a codebook of representative vectors, offering better compression than scalar quantization at the cost of more complex encoding/decoding.
- Binary quantization: Transforms vectors into binary (0/1) representations, achieving maximum compression but with more significant information loss.

A vector database that applies these compression techniques must effectively manage multiple data structures:

- A hierarchical navigable small world (HNSW) graph for navigable search
- Full-fidelity vectors (32-bit float embeddings)
- Quantized vectors (int8 or binary)

When quantization is defined in the vector index, the system builds quantized vectors and constructs the HNSW graph from these compressed vectors. Both structures are placed in memory for efficient search operations, significantly reducing the RAM footprint compared to storing full-fidelity vectors alone. The table below illustrates how different quantization mechanisms impact memory usage and disk consumption. This example focuses on HNSW indexes storing 30 GB of original float32 embeddings alongside a 0.1 GB HNSW graph structure.
Our RAM usage estimates include a 10% overhead factor (1.1 multiplier) to account for JVM memory requirements with indexes loaded into the page cache, reflecting typical production deployment conditions. Actual overhead may vary based on specific configurations. Here are key attributes to consider based on the table below:

- Estimated RAM usage: Combines HNSW graph size with either full or quantized vectors, plus a small overhead factor (1.1 for index overhead).
- Disk usage: Includes storage for full-fidelity vectors, the HNSW graph, and quantized vectors when applicable.

Notice that while enabling quantization increases total disk usage—because you still store full-fidelity vectors for exact nearest neighbor queries in both cases and for rescoring in the case of binary quantization—it dramatically decreases RAM requirements and speeds up initial retrieval.

MongoDB Atlas Vector Search offers powerful scaling capabilities through its automatic quantization system. As illustrated in Figure 1 below, MongoDB Atlas supports multiple vector search indexes with varying precision levels: Float32 for maximum accuracy, Scalar Quantized (int8) for balanced performance with 3.75× RAM reduction, and Binary Quantized (1-bit) for maximum speed with 24× RAM reduction. The quantization variety provided by MongoDB Atlas allows users to optimize their vector search workloads based on specific requirements. For collections exceeding 1M vectors, Atlas automatically applies the appropriate quantization mechanism, with binary quantization particularly effective when combined with Float32 rescoring for final refinement.

Figure 1: MongoDB Atlas Vector Search architecture with automatic quantization. Data flows through embedding generation, storage, and tiered vector indexing with binary rescoring.

Binary quantization with rescoring

A particularly effective strategy is to combine binary quantization with a rescoring step using full-fidelity vectors. This approach offers the best of both worlds: extremely fast lookups thanks to binary data formats, plus more precise final rankings from higher-fidelity embeddings.

- Initial retrieval (binary): Embeddings are stored as binary to minimize memory usage and accelerate the approximate nearest neighbor (ANN) search. Hamming distance (via XOR plus population count) is used, which is computationally faster than Euclidean or cosine similarity on floats.
- Rescoring: The top candidate results from the binary pass are re-evaluated using their float or int8 vectors to refine the ranking. This step mitigates the loss of detail in binary vectors, balancing result accuracy with the speed of the initial retrieval.

By pairing binary vectors for rapid recall with full-fidelity embeddings for final refinement, you can keep your system highly performant and maintain strong relevance.

The need for quantization-aware models

Not all embedding models perform equally well under quantization. Models need to be specifically trained with quantization in mind to maintain their effectiveness when compressed. Some models—especially those trained purely for high-precision scenarios—suffer significant accuracy drops when their embeddings are represented with fewer bits. Quantization-aware training (QAT) involves:

- Simulating quantization effects during the training process
- Adjusting model weights to minimize information loss
- Ensuring robust performance across different precision levels

This is particularly important for production applications where maintaining high accuracy is crucial.
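To see why the binary pass described above is so cheap, consider the toy sketch below: it packs two small float vectors into bits (1 if the component is positive, else 0) and compares them with XOR plus a population count. The thresholding scheme and vectors are illustrative only; in practice Atlas performs quantization inside the vector index rather than in application code like this.

    public class BinaryQuantizationSketch {

        // Naive sign-based binarization: one bit per dimension, packed into a long (up to 64 dims here).
        static long binarize(float[] vector) {
            long bits = 0L;
            for (int i = 0; i < vector.length; i++) {
                if (vector[i] > 0f) {
                    bits |= 1L << i;
                }
            }
            return bits;
        }

        // Hamming distance via XOR + popcount: the cheap comparison used for the initial binary pass.
        static int hammingDistance(long a, long b) {
            return Long.bitCount(a ^ b);
        }

        public static void main(String[] args) {
            float[] docEmbedding   = { 0.12f, -0.40f,  0.33f, -0.08f,  0.91f, -0.27f };
            float[] queryEmbedding = { 0.10f, -0.35f, -0.02f, -0.11f,  0.80f, -0.30f };

            long docBits = binarize(docEmbedding);
            long queryBits = binarize(queryEmbedding);

            // Each 32-bit float component collapses to a single bit, a 32x reduction in vector size.
            System.out.println("Hamming distance: " + hammingDistance(docBits, queryBits));
        }
    }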
Embedding models like those from Voyage AI—which recently joined MongoDB—are specifically designed with quantization awareness, making them more suitable for scaled deployments. These models preserve more of their essential feature information even under aggressive compression. Voyage AI provides a suite of embedding models specifically designed with QAT in mind, ensuring minimal loss in semantic quality when shifting to 8-bit integer or even binary representations.

Figure 2: Embedding model performance comparing retrieval quality (NDCG@10) versus storage costs. Voyage AI models (green) maintain superior retrieval quality even with binary quantization (triangles) and int8 compression (squares), achieving up to 100x storage efficiency compared to standard float embeddings (circles).

The graph above shows several important patterns that demonstrate why quantization-aware training (QAT) is crucial for maintaining performance under aggressive compression. The Voyage AI family of models (shown in green) demonstrates strong retrieval quality even under extreme compression. The voyage-3-large model demonstrates this dramatically: when using int8 precision at 1024 dimensions, it performs nearly identically to its float-precision, 2048-dimensional counterpart, showing only a minimal 0.31% quality reduction despite using 8 times less storage. This showcases how models specifically designed with quantization in mind can preserve their semantic understanding even under substantial compression.

Even more impressive is how QAT models maintain their edge over larger, uncompressed models. The voyage-3-large model with int8 precision and 1024 dimensions outperforms OpenAI-v3-large (using float precision and 3072 dimensions) by 9.44% while requiring 12 times less storage. This performance gap highlights that raw model size and dimension count aren't the decisive factors—it's the intelligent design for quantization that matters.

The cost implications become truly striking when we examine binary quantization. Using voyage-3-large with 512-dimensional binary embeddings, we still achieve better retrieval quality than OpenAI-v3-large with its full 3072-dimensional float embeddings while using 200 times less storage. To put this in practical terms: what would have cost $20,000 in monthly storage can be reduced to just $100 while actually improving performance.

In contrast, models not specifically trained for quantization, such as OpenAI's v3-small (shown in gray), show a more dramatic drop in retrieval quality as compression increases. While these models perform well in their full floating-point representation (at 1x storage cost), their effectiveness deteriorates more sharply when quantized, especially with binary quantization. For production applications where both accuracy and efficiency are crucial, choosing a model that has undergone quantization-aware training can make the difference between a system that degrades under compression and one that maintains its effectiveness while dramatically reducing resource requirements. Read more on the Voyage AI blog.

Impact: Memory, retrieval latency, and cost

Vector quantization addresses the three core challenges of large-scale AI workloads—memory, retrieval latency, and cost—by compressing full-precision embeddings into more compact representations. Below is a breakdown of how quantization drives efficiency in each area.

Figure 3: Quantization performance metrics: memory savings with minimal accuracy trade-offs. Comparison of scalar vs. binary quantization showing RAM reduction (75%/96%), query accuracy retention (99%/95%), and performance gains (>100%) for vector search operations.

Memory and storage optimization

Quantization techniques dramatically reduce compute resource requirements while maintaining search accuracy for vector embeddings at scale.

- Lower RAM footprint: Storage in RAM is often the primary bottleneck for vector search systems. Embeddings stored as 8-bit integers or binary reduce overall memory usage, allowing significantly more vectors to remain in memory. This compression directly shrinks vector indexes (e.g., HNSW), leading to faster lookups and fewer disk I/O operations.
- Reduced disk usage in collections with binData: binData (binary) formats can cut raw storage needs by up to 66%. Some disk overhead may remain when storing both quantized and original vectors, but the performance benefits justify this tradeoff.
- Practical gains: A 3.75× reduction in RAM usage with scalar (int8) quantization; up to a 24× reduction with binary quantization, especially when combined with rescoring to preserve accuracy; and significantly more efficient vector indexes, enabling large-scale deployments without prohibitive hardware upgrades.

Retrieval latency

Quantization methods leverage CPU cache optimizations and efficient distance calculations to accelerate vector search operations beyond what's possible with standard float32 embeddings.

- Faster similarity computations: Smaller data types are more CPU-cache-friendly, which speeds up distance calculations. Binary quantization uses Hamming distance (XOR plus popcount), yielding dramatically faster top-k candidate retrieval.
- Improved throughput: With reduced memory overhead, the system can handle more concurrent queries at lower latencies. In internal benchmarks, query performance for large-scale retrievals improved by up to 80% when adopting quantized vectors.

Cost efficiency

Vector quantization provides substantial infrastructure savings by reducing memory and computation requirements while maintaining retrieval quality through compression and rescoring techniques.

- Lower infrastructure costs: Smaller vectors consume fewer hardware resources, enabling deployments on less expensive instances or tiers. Reduced CPU/GPU time per query allows resource reallocation to other critical parts of the application.
- Better scalability: As data volumes grow, memory and compute requirements don't escalate as sharply. Quantization-aware training (QAT) models, such as those from Voyage AI, help maintain accuracy while reaping cost savings at scale.

By compressing vectors into int8 or binary formats, you tackle memory constraints, accelerate lookups, and curb infrastructure expenses—making vector quantization an indispensable strategy for high-volume AI applications.

MongoDB Atlas: Built for Changing Workloads with Automatic Vector Quantization

The good news for developers is that MongoDB Atlas supports automatic scalar and automatic binary quantization in index definitions, reducing the need for external scripts or manual data preprocessing. By quantizing at index build time and query time, organizations can run large-scale vector workloads on smaller, more cost-effective clusters.

A common question developers ask is when to use quantization. Quantization becomes most valuable once you reach substantial data volumes—on the order of a million or more embeddings. At this scale, memory and compute demands can skyrocket, making reduced memory footprints and faster retrieval speeds essential.
Examples of cases that call for quantization include:

- High-volume scenarios: Datasets with millions of vector embeddings where you must tightly control memory and disk usage.
- Real-time responses: Systems needing low-latency queries under high user concurrency.
- High query throughput: Environments with numerous concurrent requests demanding both speed and cost-efficiency.

For smaller datasets (under 1 million vectors), the added complexity of quantization may not justify the benefits. However, for large-scale deployments, it becomes a critical optimization that can dramatically improve both performance and cost-effectiveness. Now that we have established a strong foundation on the advantages of quantization—specifically the benefits of binary quantization with rescoring—feel free to refer to the MongoDB documentation to learn more about implementing vector quantization. You can also learn more about Voyage AI's state-of-the-art embedding models on our product page.
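As a starting point, the sketch below builds the kind of Atlas Vector Search index definition that enables automatic quantization, using the Java driver's Document API. The shape (a "fields" array with path, numDimensions, similarity, and a quantization setting of "scalar" or "binary") follows the documented vector index definition format, but verify the exact options against the MongoDB documentation for your Atlas version; the resulting JSON can then be applied through the Atlas UI, the Atlas CLI, or the driver's search-index helpers.

    import java.util.List;
    import org.bson.Document;

    public class VectorIndexDefinitionSketch {
        public static void main(String[] args) {
            // Hypothetical collection layout: documents store a 1024-dimensional embedding in "embedding".
            Document vectorField = new Document("type", "vector")
                    .append("path", "embedding")
                    .append("numDimensions", 1024)
                    .append("similarity", "dotProduct")
                    .append("quantization", "scalar");   // or "binary" for maximum compression

            Document definition = new Document("fields", List.of(vectorField));

            // Print the JSON to paste into the Atlas UI/CLI, or pass the definition to the
            // driver's createSearchIndexes helper if your driver version supports vector indexes.
            System.out.println(definition.toJson());
        }
    }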

February 27, 2025

Hasura: Powerful Access Control on MongoDB Data

Across industries—and especially in highly regulated sectors like healthcare, financial services, and government—MongoDB has been a preferred modern database solution for organizations handling large volumes of sensitive data that require strict compliance adherence. In such enterprises, secure access to data via APIs is critical, particularly when information is distributed across multiple MongoDB databases and external data stores. Hasura extends and enhances MongoDB's access control capabilities by providing granular permissions at the column and field level across multiple databases through its unified interface. Yet designing a secure API system from scratch to meet this need takes significant development resources and becomes a burden to maintain and update. Hasura solves this problem for enterprises by elegantly serving as a federated data layer, with robust access control policies built in. Hasura enforces powerful access control rules across data domains, joins data from multiple sources, and exposes it to the user via a single API. In this blog, we'll explore how Hasura and MongoDB work together to empower teams with granular data access control while simplifying data retrieval across collections.

Team-specific data domains

First, Hasura makes it possible for a business unit or team to own a set of databases and collections, also known as a data domain. Within each domain, a team can connect any number of MongoDB databases and other data sources, allowing the domain to have fine-grained role-based access control (RBAC) and attribute-based access control (ABAC) across all sources. More important, though, is the ability to enable relationships that span domains, effectively connecting data from various teams or business units and exposing it to a verified user as necessary. This granular permissioning system means that the right users can access the right data at the right time, without compromising security.

Field-level access control

Hasura's MongoDB connector also provides a powerful, declarative way to define access control rules at the collection and field level. For each MongoDB collection, roles may be specified for read, create, update, and delete (CRUD) permissions. Within those permissions, access may be further restricted based on the values of specific attributes. By defining these rules declaratively, Hasura makes it easy to implement and reason about complex access control policies.

Joining across collections

In addition to enabling granular access control, Hasura simplifies the retrieval of related data across multiple databases. By inspecting your MongoDB collections, Hasura can automatically create schemas and API endpoints (in GraphQL, REST, etc.) that let you query data along with its relationships. This eliminates the need to manually stitch together data from different collections in your application code. Instead, a graph of related data can be easily retrieved in a single API call, while still having that data filtered through your access control rules.

As companies wrestle with the challenges of secure data access across sprawling database environments, Hasura provides a compelling solution. By serving as a federated data layer on MongoDB and external data, Hasura enables granular access control through a combination of role-based permissions, attribute-based restrictions, and the ability to join data and apply access rules across sources.

Figure 1. Hasura & MongoDB demo environment.

With Hasura's MongoDB connector, teams can easily implement sophisticated data access policies in a declarative way and provide their applications with secure access to the data they need. This combination of security and simplicity makes Hasura and MongoDB a powerful solution for organizations that strive to modernize, especially those in industries with strict compliance requirements. Visit the MongoDB Resources Hub to learn more about MongoDB Atlas.

February 26, 2025

동네알바 Connects Two Million Job Seekers and Employers with MongoDB Atlas Search

Today's hiring landscape is evolving rapidly toward more efficient, flexible, and transparent ways of working. In step with this change, 동네알바, the job-matching platform operated by Korean startup 라라잡, delivers localized, personalized hiring services under the mission of "creating a world where the right people and the right workplaces help and trust one another," and has established itself as a trusted platform in the part-time hiring market. 동네알바 gained 100,000 users within just four months of launching its app and has since grown into a platform used by two million part-time workers and employers every year. Since being acquired in 2023 by Saramin (사람인), Korea's largest job-information provider, the company has been expanding beyond job matching into an HR SaaS platform that also covers people management.

동네알바's user-friendly app interface

Transforming search performance with MongoDB Atlas Search

The 라라잡 development team adopted MongoDB early on to give millions of users a smooth, reliable hiring experience. In particular, the team continuously improves 동네알바's user experience on the strength of the flexibility, speed, and scalability that MongoDB Atlas provides. As the first step in improving the platform, the team introduced MongoDB Atlas Search in May 2023. 라라잡 needed a powerful search engine that could quickly process a vast volume of job listings and user queries, and it was able to apply MongoDB Atlas Search easily within its existing MongoDB Atlas environment, without building separate infrastructure or incurring additional costs.

백우락, backend team lead at 라라잡, said: "After adopting MongoDB Atlas Search, performance improved significantly, with search speed increasing 4x and aggregation speed 15x, and that translated into a better user experience. The biggest advantage of MongoDB Atlas Search is that it can be applied to whatever deployment approach and purpose we want without configuring separate collections." He added: "Using MongoDB's ability to handle unstructured data, we efficiently manage a wide range of data types in the 동네알바 app, from job listings to user profiles. This let us optimize the recommended-job discovery service for part-time workers and complete an optimal search experience by building Atlas Search-based collections in just two months."

Building secure and intuitive search

MongoDB Atlas Search supports indexing of encrypted data such as names and contact details, providing secure yet accurate search results over anonymized information. The team also used MongoDB's geospatial query operators to implement radius search on a map, maximizing the strengths of 동네알바 as a location-based service. These capabilities deliver an intuitive service optimized for mobile and make the platform even more convenient for users.

Growing into a trusted hiring platform

라라잡 aims to foster a transparent and trustworthy hiring environment and is building a platform that part-time workers and employers can use with confidence. MongoDB plays a key role as a core partner supporting this technical innovation, and the company plans to make broad use of MongoDB's technology going forward, including introducing AI features and improving its recommendation and monitoring systems. "Building on MongoDB's flexibility and scalability, we plan to build a platform that not only supports the millions of people who use 동네알바 but also creates a safe, efficient, and trustworthy environment for both workers and employers," said 백우락. "We will continue to turn to MongoDB anywhere we need to manage unstructured data with ease."
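To illustrate the kind of map-radius search described above, here is a hedged sketch using the MongoDB Java driver's geospatial helpers. The database, collection, field names, and coordinates are hypothetical (a roughly 2 km radius around a point in Seoul) and are not taken from 동네알바's actual schema; it assumes job documents store a GeoJSON point in a "location" field covered by a 2dsphere index.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import com.mongodb.client.model.Indexes;
    import org.bson.Document;

    public class NearbyJobsSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")) {
                MongoCollection<Document> jobs = client.getDatabase("jobs_demo").getCollection("postings");

                // A 2dsphere index lets MongoDB answer spherical-geometry queries on GeoJSON points.
                jobs.createIndex(Indexes.geo2dsphere("location"));

                // Find postings within ~2 km of a point (longitude, latitude);
                // geoWithinCenterSphere takes the radius in radians: distance / Earth radius (~6378.1 km).
                double lon = 126.9780, lat = 37.5665, radiusRadians = 2.0 / 6378.1;
                jobs.find(Filters.geoWithinCenterSphere("location", lon, lat, radiusRadians))
                    .limit(10)
                    .forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }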

February 26, 2025

Debunking MongoDB Myths: Enterprise Use Cases

MongoDB is frequently viewed as a go-to database for proof-of-concept (POC) applications. The flexibility of MongoDB's document model enables teams to rapidly prototype and iterate, adapting the data model as requirements evolve during the early stages of application development. It is common for applications to evolve continuously during initial development. However, moving an application to production requires developers to add validation logic and fully define the data structures. A frequent assumption is that because MongoDB data models can be flexible, they cannot be structured. In reality, while MongoDB does not require a defined schema, it does support schemas. MongoDB allows users to precisely calibrate rules and enforcement levels for every component of data, enabling a level of granular control that traditional databases, with their all-or-nothing approach to schema enforcement, struggle to match.

Data model flexibility is not a binary choice between "schemaless" and "strictly enforced." More accurately, it exists on a spectrum in MongoDB. Users can incrementally define schemas in parallel with the overall "hardening" of the application. MongoDB's approach to data modeling makes it an ideal platform for business-critical applications. It is designed to support the entire application lifecycle, from nascent concepts and initial prototypes to global rollouts of production environments. Enterprise-grade features like ACID transactions and industry-leading scalability ensure MongoDB can meet the demands of any modern application.

Learning from the past

So why do misconceptions persist regarding MongoDB? These perceptions originated over a decade ago. Teams working with MongoDB back in 2014 or earlier faced challenges when deploying it in production. Applications could slow down under heavy loads, data consistency was not guaranteed when writing to multiple documents, and teams lacked tools to monitor and manage deployments effectively. As a result, MongoDB gained a perception of being unsuitable for specific use cases or critical workloads. This perception has persisted despite a decade of subsequent development and innovation, making it an inaccurate assessment of today's preeminent document database. MongoDB has evolved into a mature platform that directly addresses these historical pain points. Today's MongoDB delivers robust tooling, guaranteed consistency, and comprehensive data validation capabilities.

Myth: MongoDB is a niche database

What are the top use cases for MongoDB? This question is difficult to answer because MongoDB is a general-purpose database that can support any use case. The document model is the primary driver of MongoDB's versatility. Documents are similar to JSON objects, with data represented as key-value pairs. Values can be simple types like strings or numbers, but they can also be arrays or nested objects, which allows documents to easily represent complex hierarchical structures. The document model's flexibility allows data to be stored exactly as the application consumes it. This enables highly efficient writes and optimizes data for retrieval without needing to set up standard or materialized views, although both are supported.

While MongoDB is no longer a niche database, it does have advanced capabilities to support niche requirements. The aggregation pipeline provides a powerful framework for data analytics and transformation.
Time-series collections store and query temporal data efficiently to support IoT and financial applications. Geospatial indexes and queries enable location-based applications to perform complex proximity calculations. MongoDB Atlas includes native support for vector search, which enabled Cisco to experiment with generative AI use cases and streamline their applications' path to production. MongoDB handles the diverse data requirements that power modern applications. The document model provides the foundation for general use, while advanced features ensure teams do not need to integrate additional tools as application requirements evolve. The result is a single platform that can grow from prototype to production, handling general requirements and specialized workloads with equal proficiency.

Myth: MongoDB is not suitable for enterprise-grade workloads

A common perception is that MongoDB works well for small applications but falls short at enterprise scale. Ironically, many organizations first consider MongoDB while struggling to scale their relational databases. These organizations have discovered that MongoDB's architecture is specifically designed to support scale-out distributed deployments. While MongoDB matches relational databases in vertical scaling capabilities, the document model enables a more natural and intuitive approach to horizontal scaling. Related data is stored together in a single document, so MongoDB can easily distribute complete units of data across shards. This contrasts with relational databases, where data is split across multiple tables, making it difficult to place all related data on the same shard.

Horizontal scaling with MongoDB sets an organization up for better performance. Most MongoDB queries need to access only a single shard, whereas equivalent queries in a relational database often require costly cross-server communication. Telefonica Tech has leveraged horizontal scaling to nearly double their capacity with a 40% hardware reduction. MongoDB Atlas further automates and simplifies these scaling capabilities through a fully managed service built to meet demanding enterprise requirements. Atlas provides a 99.995% uptime guarantee and availability across AWS, Google Cloud, and Azure in over 100 regions worldwide. By offloading the operational complexity of deploying and running databases at scale, it frees teams to focus on rapid development and innovation rather than infrastructure maintenance.

Powering the enterprise applications of today and tomorrow

Over 50,000 customers and 70% of the Fortune 100 rely on MongoDB to power their enterprise applications. Independent industry reports from Gartner and Forrester continue to recognize MongoDB as a leader in the database space. Do not let outdated myths keep your organization from the competitive advantages of MongoDB's enterprise capabilities. To learn more about MongoDB, head over to MongoDB University and take our free Intro to MongoDB course. Read more about customers building on MongoDB. Read our first blog in this series about myths around MongoDB vs. relational databases. Check out the full video to learn about the other six myths that we're debunking in this series.
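To ground the "flexible but not structureless" point made at the top of this post, here is a minimal sketch of creating a collection with a $jsonSchema validator through the Java driver. The connection string, database, collection, and field names are hypothetical; the validator requires only two fields while leaving the rest of each document open, which is the kind of incremental schema hardening described above.

    import java.util.Arrays;

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.CreateCollectionOptions;
    import com.mongodb.client.model.ValidationAction;
    import com.mongodb.client.model.ValidationOptions;
    import org.bson.Document;

    public class SchemaValidationSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")) {
                MongoDatabase db = client.getDatabase("app_demo");

                // Require an email string and a createdAt date; other fields remain free-form.
                Document jsonSchema = new Document("bsonType", "object")
                        .append("required", Arrays.asList("email", "createdAt"))
                        .append("properties", new Document()
                                .append("email", new Document("bsonType", "string"))
                                .append("createdAt", new Document("bsonType", "date")));

                ValidationOptions validation = new ValidationOptions()
                        .validator(new Document("$jsonSchema", jsonSchema))
                        .validationAction(ValidationAction.ERROR); // reject writes that violate the schema

                db.createCollection("users", new CreateCollectionOptions().validationOptions(validation));
            }
        }
    }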

February 25, 2025

Redefining the Database for AI: Why MongoDB Acquired Voyage AI

This post is also available in: Deutsch , Français , Español , Português , Italiano , 한국어 , 简体中文 . AI is reshaping industries, redefining customer experiences, and transforming how businesses innovate, operate, and compete. While much of the focus is on frontier models, a fundamental challenge lies in data—how it is stored, retrieved, and made useful for AI applications. The democratization of AI-powered software depends on building on top of the right abstractions, yet today, creating useful, real-time AI applications at scale is not feasible for most organizations. The challenge isn’t just complexity—it’s trust. AI models are probabilistic, meaning their outputs aren’t deterministic and predictable. This is easily evident in the hallucination problem in chatbots today, and becomes even more critical with the rise of agents, where AI systems make autonomous decisions. Development teams need the ability to control, shape, and ground generated outputs to align with their objectives and ensure accuracy. AI-powered search and retrieval is a powerful tool that extracts relevant contextual data from specific sources, augmenting AI models to generate reliable and accurate responses or take responsible and safe actions, as seen in the prominent retrieval augmented generation (RAG) approach. At the core of AI-powered retrieval are embedding generation and reranking—two key AI components that capture the semantic meaning of data and assess the relevance of queries and results. We believe embedding generation and reranking, as well as AI-powered search, belong in the database layer, simplifying the stack and creating a more reliable foundation for AI applications. By bringing more intelligence into the database, we help businesses mitigate hallucinations, improve trustworthiness, and unlock AI’s full potential at scale. The most impactful applications require a flexible, intelligent, and scalable data foundation. That’s why we’re excited to announce the acquisition of Voyage AI , a leader in embedding and reranking models that dramatically improve accuracy through AI-powered search and retrieval. This move isn’t just about adding AI capabilities— it’s about redefining the database for the AI era . Why this matters: The future of AI is built on better relevance and accuracy in data AI is probabilistic—it’s not built like traditional software with pre-defined rules and logic. Instead, it generates responses or takes action based on how the AI model is trained and what data is retrieved. However, due to the probabilistic nature of the technology, AI can hallucinate. Hallucinations are a direct consequence of poor or imprecise retrieval—when AI lacks access to the right data, it generates plausible but incorrect information. This is a critical barrier to AI adoption, especially in enterprises and for mission-critical use cases where accuracy is non-negotiable. This makes retrieving the most relevant data essential for AI applications to deliver high-quality, contextually accurate results. Today, developers rely on a patchwork of separate components to build AI-powered applications. Sub-optimal choices of these components, such as embedding models, can yield low-relevancy data retrieval and low-quality generated outputs. This fragmented approach is complex, costly, inefficient, and cumbersome for developers. With Voyage AI, MongoDB solves this challenge by making AI-powered search and retrieval native to the database. 
Instead of implementing workarounds or managing separate systems, developers can generate high-quality embeddings from real-time operational data, store vectors, perform semantic search, and refine results—all within MongoDB. This eliminates complexity and delivers higher accuracy, lower latency, and a streamlined developer experience. What Voyage AI brings to MongoDB Voyage AI has built a world-class AI research team with roots at Stanford, MIT, UC Berkeley, and Princeton and has rapidly become a leader in high-precision AI retrieval. Their technology is already trusted by some of the most advanced AI startups, including Anthropic, LangChain, Harvey, and Replit. Notably, Voyage AI’s embedding models are the highest-rated zero-shot models in the Hugging Face community. Voyage AI’s models are designed to increase the quality of generated output by: Enhancing vector search by creating embeddings that better capture meaning across text, images, PDFs, and structured data. Improving retrieval accuracy through advanced reranking models that refine search results for AI-powered applications. Enabling domain-specific AI with fine-tuned models optimized for different industries such as financial services, healthcare, and law, and use cases such as code generation. By integrating Voyage AI’s retrieval capabilities into MongoDB, we’re helping organizations more easily build AI applications with greater accuracy and reliability—without unnecessary complexity. How Voyage AI will be integrated into MongoDB We are integrating Voyage AI with MongoDB in three phases. In the first phase, Voyage AI’s text embedding, multi-modal embedding, and reranking models will remain widely available through Voyage AI’s current APIs and via the AWS and Azure Marketplaces—ensuring developers can continue to use their best-in-class embedding and reranking capabilities. We will also invest in the scalability and enterprise readiness of the platform to support the increased adoption of Voyage AI’s models. Next, we will seamlessly embed Voyage AI’s capabilities into MongoDB Atlas , starting with an auto-embedding service for Vector Search, which will handle embedding generation automatically. Native reranking will follow, allowing developers to boost retrieval accuracy instantly. We also plan to expand domain-specific AI capabilities to better support different industries (e.g., financial services, legal, etc.) or use cases (e.g., code generation). Finally, we will advance AI-powered retrieval with enhanced multi-modal capabilities, enabling seamless retrieval and ranking of text, images, and video. We also plan to introduce instruction-tuned models, allowing developers to refine search behavior using simple prompts instead of complex fine-tuning. This will be complemented by embedding lifecycle management in MongoDB Atlas, ensuring continuous updates and real-time optimization for AI applications. What this means for developers and businesses AI-powered applications need more than a database that just stores, processes, and persists data—they need a database that actively improves retrieval accuracy, scales seamlessly, and eliminates operational friction. With Voyage AI, MongoDB redefines what’s required for a database to underpin mission-critical AI-powered applications. Developers will no longer need to manage external embedding APIs, standalone vector stores, or complex search pipelines. 
AI retrieval will be built into the database itself, making semantic search, vector retrieval, and ranking as seamless as traditional queries. For businesses, this translates to faster time-to-value and greater confidence in scaling AI applications. By delivering high-quality results at scale, enterprises can seamlessly integrate AI into their most critical use cases, ensuring reliability, performance, and real-world impact. Looking ahead: What comes next This is just the beginning. Our vision is to make MongoDB the most powerful and intuitive database for modern, AI-driven applications. Voyage AI’s models will soon be natively available in MongoDB Atlas. We will continue evolving MongoDB’s AI retrieval capabilities, making them smarter, more adaptable, and capable of handling a wider range of data types and use cases. Stay tuned for more details on how you can start using Voyage AI’s capabilities in MongoDB. To learn more about how MongoDB and Voyage AI are powering state-of-the-art AI search and retrieval for building, scaling, and deploying intelligent applications, visit our product page.
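For readers who want to try the pieces as they exist today, before the native integration described above ships, the following sketch combines Voyage AI's standalone Python client with MongoDB Atlas Vector Search. The model names, index name, and database/collection/field names are assumptions for illustration, and an Atlas vector search index on the embedding field is assumed to already exist.

```python
import voyageai
from pymongo import MongoClient

vo = voyageai.Client()  # assumes VOYAGE_API_KEY is set in the environment
mongo = MongoClient("<your-atlas-connection-string>")
docs = mongo["kb"]["articles"]

# 1. Embed source text with Voyage AI and store the vector with the document.
text = "MongoDB Atlas supports vector search natively."
doc_vector = vo.embed([text], model="voyage-3", input_type="document").embeddings[0]
docs.insert_one({"text": text, "embedding": doc_vector})

# 2. Embed the query and retrieve semantically similar documents with Atlas
#    Vector Search (assumes an index named "vector_index" on the "embedding" field).
query = "Which databases can store and search embeddings?"
query_vector = vo.embed([query], model="voyage-3", input_type="query").embeddings[0]
candidates = list(docs.aggregate([
    {"$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 100,
        "limit": 5,
    }},
    {"$project": {"_id": 0, "text": 1}},
]))

# 3. Rerank the candidates with a Voyage AI reranking model for higher precision.
reranked = vo.rerank(query, [c["text"] for c in candidates], model="rerank-2")
for item in reranked.results:
    print(item.relevance_score, item.document)
```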

February 24, 2025
