
MongoDB’s 2024 Year in Review

It’s hard to believe that another year is almost over! 2024 was a transformative year for MongoDB, marked by both innovation and releases that further our commitment to empowering customers, developers, and partners worldwide. So without further ado, let’s dive into MongoDB’s 2024 highlights. We’ll also share our executive team’s predictions of what 2025 might have in store.

A look back at 2024

MongoDB 8.0: The most performant version of MongoDB ever

In October we released MongoDB 8.0, the fastest, most resilient, secure, and reliable version of MongoDB yet. Architectural optimizations in MongoDB 8.0 have significantly improved the database’s performance, with 36% faster reads and 59% higher throughput for updates. Our new architecture also makes horizontal scaling cheaper and faster. Finally, working with encrypted data is easier than ever, thanks to the addition of range queries in Queryable Encryption (which allows customers to encrypt, store, and query data directly). Whether you’re a startup building your first app or a global enterprise managing mission-critical workloads, MongoDB 8.0 offers unmatched power and flexibility, solidifying MongoDB’s place as the world’s most popular document database. Learn more about what makes 8.0 the best version of MongoDB ever on the MongoDB 8.0 page.

Delivering customer value with the MongoDB AI Applications Program

AI applications have become a cornerstone of modern software, and MongoDB is committed to equipping customers with the technology, tools, and support they need to succeed on their AI journey. That’s why we launched the MongoDB AI Applications Program (MAAP) in 2024, a comprehensive program designed to accelerate the development of AI applications.
By offering customers resources like access to AI specialists, an ecosystem of leading AI and tech companies, and AI architectural best practices supported by integrated services, MAAP helps solve customers’ most pressing business challenges, unlocks competitive advantages, and accelerates time to value for AI investments. Overall, MAAP’s aim is to set customers on the path to AI success. Visit the MongoDB AI Applications Program page or watch our session from AWS re:Invent to learn more!

Advancing AI with MongoDB Atlas Vector Search

In 2024, MongoDB further cemented its role in the AI space with enhancements to MongoDB Atlas Vector Search. Recognized in 2024 (for the second consecutive year!) as one of the most loved vector databases, MongoDB continues to provide a scalable, unified, and secure platform for building cutting-edge AI use cases. Recent advancements like vector quantization in Atlas Vector Search help deliver even more value to our customers, enabling them to scale applications to billions of vectors at a lower cost. Head over to our Atlas Vector Search quick start guide to get started with Atlas Vector Search today, or visit our AI resources hub to learn more about how MongoDB can power AI applications.

Search Nodes: Performance at scale

Search functionality is indispensable in modern applications, and with Atlas Search Nodes, organizations can now optimize their search workloads like never before. By providing dedicated infrastructure for Atlas Search and Vector Search workloads, Search Nodes ensure high performance (e.g., a 40–60% decrease in query times), scalability, and reliability, even for the most demanding use cases. As of this year, Search Nodes are generally available across AWS, Google Cloud, and Microsoft Azure. This milestone underscores MongoDB’s commitment to delivering powerful solutions that scale alongside our customers’ needs. To learn more about Search Nodes, check out our documentation or watch our tutorial.
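To make the Atlas Vector Search discussion above concrete, here is a minimal sketch of the kind of "$vectorSearch" aggregation pipeline a semantic-search query uses. The index name ("vector_index"), the "embedding" field, and the toy query vector are hypothetical placeholders, not values from this post; in practice the pipeline would be executed with a driver, for example via collection.aggregate(pipeline) in PyMongo.

```python
# Illustrative sketch of an Atlas Vector Search query pipeline.
# Index name, field path, and query vector are placeholders.

def build_vector_search_stage(query_vector, limit=5):
    """Build the $vectorSearch stage of an aggregation pipeline."""
    return {
        "$vectorSearch": {
            "index": "vector_index",      # Atlas Vector Search index name
            "path": "embedding",          # document field holding the vector
            "queryVector": query_vector,  # embedding of the search query
            "numCandidates": limit * 20,  # candidates considered before ranking
            "limit": limit,               # number of results to return
        }
    }

pipeline = [
    build_vector_search_stage([0.12, -0.07, 0.33]),
    # Surface the similarity score alongside each matching document.
    {"$project": {"_id": 0, "title": 1, "score": {"$meta": "vectorSearchScore"}}},
]
```

Running such a pipeline on a Search Node keeps the vector workload on dedicated infrastructure, separate from the operational database nodes.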
Looking ahead: MongoDB’s 2025 predictions

After the excitement of the past few years, 2025 will be defined by ensuring that technology investments deliver tangible value. Organizations remain excited about the potential AI and emerging technologies hold to solve real business challenges, but are increasingly focused on maintaining a return on investment.

“Enterprises need to innovate faster than ever, but speed is no longer the only measure of success. Increasingly, organizations are laser-focused on ensuring that their technology investments directly address critical business challenges and provide clear ROI and competitive advantage—whether it’s optimizing supply chains, delivering hyper-personalized customer experiences, or scaling operations efficiently,” said Sahir Azam, Chief Product Officer at MongoDB. “In 2025, I expect to see organizations make significant strides in driving this innovation and efficiency by applying AI to more production use cases and by maturing the way they leverage their data to build compelling and differentiated customer experiences.”

Indeed, we expect to see organizations make more strategic investments in emerging technologies like gen AI—innovating with a sharp focus on solving business challenges.

“In 2025, we can expect the focus to shift from ‘what AI can do’ to ‘what AI should do,’ moving beyond the hype to a clearer understanding of where AI can provide real value and where human judgment is still irreplaceable,” said Tara Hernandez, VP of Developer Productivity at MongoDB. “As we advance, I think we’ll see organizations begin to adopt more selective, careful applications of AI, particularly in areas where stakes are high, such as healthcare, finance, and public safety. A refined approach to AI development will be essential—not only for producing quality results but also to build trust, ensuring these tools genuinely support human goals rather than undermining them.”

With more capable, accessible application development tools and customer-focused programs like MAAP at developers’ fingertips, 2025 is an opportunity to make a data-driven impact faster than ever before.

“Right now, organizations have an opportunity to leverage their data to reimagine how they do business, to more effectively adapt to a changing world, and to revolutionize our quality of life,” said Andrew Davidson, SVP of Products at MongoDB. “By harnessing our latest technologies, developers can build a foundation for a transformative future.”

Head over to our updates page to learn more about the new releases and updates from MongoDB in 2024. Keep an eye on our events page to learn what's to come from MongoDB in 2025!

December 19, 2024

MongoDB Atlas Integration with Ably Unlocks Real-time Capabilities

Enterprises across sectors increasingly realize that data, like time, doesn’t wait. Indeed, harnessing and synchronizing information in real time is the new currency of business agility. Enter the alliance between MongoDB and Ably—a partnership that has led to Ably's new database connector for MongoDB Atlas. The connector provides a robust framework for businesses to create real-time, data-intensive applications that deliver top-notch user experiences, thanks to an opinionated client SDK used on top of LiveSync that ensures both data integrity and real-time consistency—without compromising your existing tech stack.

The synergy of MongoDB Atlas and Ably LiveSync

This new MongoDB Atlas-Ably integration tackles a fundamental challenge in modern application architecture: maintaining data consistency across distributed systems in real time. MongoDB Atlas serves as the foundation—a flexible, scalable database service that adapts to the ebb and flow of data demands. Meanwhile, Ably LiveSync acts as the nervous system, ensuring that every change and every update resonates instantly across the entire application ecosystem.

The Ably LiveSync database connector for MongoDB Atlas offers a transformative approach to real-time data management, combining unparalleled scalability with seamless synchronization. This solution effortlessly adapts to growing data volumes and expanding user bases, catering to businesses of all sizes—from agile startups to established enterprises. By rapidly conveying database changes to end users, it ensures that all stakeholders operate from a single, up-to-date source of truth, fostering data consistency across the entire organization. At its core, LiveSync is built with robust resilience in mind, featuring built-in failover mechanisms and connection recovery capabilities. This architecture provides businesses with the high availability they need to maintain continuous operations in today's always-on digital landscape.
Moreover, by abstracting away the complexities of real-time infrastructure, LiveSync empowers developers to focus on creating features that drive business value. This focus on developer productivity, combined with its scalability and reliability, positions Ably LiveSync for MongoDB Atlas as a cornerstone technology for companies aiming to harness the power of real-time data synchronization.

Figure 1: Ably real-time integration with MongoDB Atlas.

Industry transformation: A real-time revolution

This new integration has a number of implications across various sectors. For example, in the banking and financial services sector, the MongoDB Atlas-Ably integration enables instantaneous fraud detection systems that can promptly react to potential threats. Live trading platforms benefit as well, seamlessly updating to reflect every market change as it happens. Banking applications are equally enhanced, with real-time updating of account balances and transactions, ensuring that users always have access to the most recent financial information.

In the retail industry, meanwhile, the integration facilitates real-time inventory management across both physical and online stores, ensuring that supply matches demand at all times. This capability supports dynamic pricing strategies that can adapt instantly to fluctuations in consumer interest, and it powers personalized shopping experiences with live product recommendations tailored to individual customer preferences.

Manufacturing and mobility sectors also see transformative benefits. With the capability for real-time monitoring of production lines, businesses can implement just-in-time manufacturing processes, streamlining operations and reducing waste. Real-time tracking of vehicles and assets enhances logistics efficiency, while predictive maintenance systems provide foresight into potential equipment failures, allowing for timely interventions.

The healthcare sector stands to gain significantly from this technology.
Real-time patient monitoring systems offer healthcare providers immediate alerts, ensuring swift medical responses when necessary. Electronic health records receive seamless updates across multiple care settings, promoting coherent patient care. Efficient resource allocation is achieved through live tracking of hospital beds and equipment, optimizing hospital operations.

Insurance companies are not left out of this technological leap. The integration allows for dynamic risk assessment and pricing models that adapt in real time, refining accuracy and responsiveness. Instant claim processing and status updates enhance customer satisfaction, while live tracking of insured assets facilitates more accurate underwriting and expedites the resolution of claims.

Finally, in telecommunications and media, this integration promises buffer-free content delivery and streaming services, vastly improving the end-user experience. Real-time network performance monitoring enables proactive issue resolution, maintaining service quality. Users can enjoy synchronized experiences across multiple devices and platforms, fostering seamless interaction with digital content.

Today's business imperative

As industries continue to evolve at a rapid pace, the integration of MongoDB Atlas and Ably LiveSync provides a compelling way for businesses to not only keep up but lead the real-time revolution. For IT decision-makers looking to put their organizations at the forefront of innovation, this integration turns static data into a dynamic driver of business growth and market leadership.

Access the MongoDB Atlas and Ably LiveSync resources and start your journey towards real-time innovation today. Learn more about how MongoDB Atlas can power industry-specific solutions.

December 18, 2024

Leveraging BigQuery JSON for Optimized MongoDB Dataflow Pipelines

We're delighted to introduce a major enhancement to our Google Cloud Dataflow templates for MongoDB Atlas. By enabling direct support for JSON data types, users can now seamlessly integrate their MongoDB Atlas data into BigQuery, eliminating the need for complex data transformations. This streamlined approach not only saves users time and resources, but also empowers customers to unlock the full potential of their data through advanced data analytics and machine learning.

Figure 1: JSON feature for user options on Dataflow Templates

Limitations without JSON support

Traditionally, Dataflow pipelines designed to handle MongoDB Atlas data often necessitate transforming data into JSON strings or flattening complex structures to a single level of nesting before loading into BigQuery. Although this approach is viable, it can result in several drawbacks:

- Increased latency: The multiple data conversions required can lead to increased latency and can significantly slow down the overall pipeline execution time.
- Higher operational costs: The extra data transformations and storage requirements associated with this approach can lead to increased operational costs.
- Reduced query performance: Flattening complex document structures into JSON string format can impact query performance and make it difficult to analyze nested data.

So, what’s new?

BigQuery's native JSON format addresses these challenges by enabling users to directly load nested JSON data from MongoDB Atlas into BigQuery without any intermediate conversions. This approach offers numerous benefits:

- Reduced operating costs: By eliminating the need for additional data transformations, users can significantly reduce operational expenses, including those associated with infrastructure, storage, and compute resources.
- Enhanced query performance: BigQuery's optimized storage and query engine is designed to efficiently process data in native JSON format, resulting in significantly faster query execution times and improved overall query performance.
- Improved data flexibility: Users can easily query and analyze complex data structures, including nested and hierarchical data, without the need for time-consuming and error-prone flattening or normalization processes.

A significant advantage of this pipeline lies in its ability to directly leverage BigQuery's powerful JSON functions on the MongoDB data loaded into BigQuery. This eliminates the need for a complex and time-consuming data transformation process. The JSON data within BigQuery can be queried and analyzed using standard BQML queries. Whether you prefer a streamlined cloud-based approach or a hands-on, customizable solution, the Dataflow pipeline can be deployed either through the Google Cloud console or by running the code from the GitHub repository.

Enabling data-driven decision-making

To summarize, Google’s Dataflow template provides a flexible solution for transferring data from MongoDB to BigQuery. It can process entire collections or capture incremental changes using MongoDB's Change Streams functionality. The pipeline's output format can be customized to suit your specific needs: whether you prefer a raw JSON representation or a flattened schema with individual fields, you can easily configure it through the userOption parameter. Additionally, data transformation can be performed during template execution using user-defined functions (UDFs).

By adopting BigQuery's native JSON format in your Dataflow pipelines, you can significantly enhance the efficiency, performance, and cost-effectiveness of your data processing workflows. This powerful combination empowers you to extract valuable insights from your data and make data-driven decisions.
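To illustrate the difference described above, the sketch below contrasts what a stringified load stores with the kind of in-place querying that native JSON allows. The document, the table name ("my_dataset.orders"), and the SQL string are hypothetical placeholders that only indicate the shape of BigQuery's JSON functions, not a tested query against a real dataset.

```python
import json

# Hypothetical nested MongoDB document. With the template's userOption
# set to write native JSON, it lands in BigQuery as a JSON column
# rather than as a flattened schema or an opaque JSON string.
order = {
    "_id": "order-1042",
    "customer": {"name": "Ada", "tier": "gold"},
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
}

# What a stringified load would store: one opaque string per document.
row_as_string = json.dumps(order)

# With native JSON, nested fields stay queryable in place; the SQL
# below is illustrative of BigQuery's JSON functions (assumed names):
query = """
SELECT JSON_VALUE(doc.customer.name) AS customer_name,
       JSON_VALUE(doc.items[0].sku)  AS first_sku
FROM   my_dataset.orders
"""
```

The point of the contrast: with the string form, every analytical query first pays a parsing cost, while the native JSON column can be filtered and projected directly by the query engine.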
Follow the Google documentation to learn how to set up the Dataflow templates for MongoDB Atlas and BigQuery. Get started with MongoDB Atlas on Google Marketplace. Learn more about MongoDB Atlas on Google Cloud on our product page.

December 17, 2024

Commerce at Scale: Zepto Reduces Latency by 40% With MongoDB

Zepto is one of the fastest-growing Indian startups and a pioneer in introducing quick commerce to India. Quick commerce, sometimes referred to as “Q-commerce,” is a new, faster form of e-commerce promising ultra-quick deliveries, typically in less than one hour. Founded in July 2021, Zepto has revolutionized the Indian grocery delivery industry, offering users a choice of over 15,000 products with a promised 10-minute delivery. Since its launch, the company has rapidly expanded its operations, recording 20% monthly growth and achieving annualized sales of $1.5 billion by July 2024.

Zepto’s order processing and delivery system is instrumental in meeting its promise to customers. Zepto’s system routes new orders to a “dark store,” where bleeding-edge assignment systems help pack orders in under 75 seconds. A proprietary navigation system ensures riders can then deliver these orders promptly. As Zepto expanded, its monolithic infrastructure, based on a relational SQL database, could not achieve the scalability and operational efficiency the company needed. Zepto changed the game by turning to MongoDB Atlas.

Mayank Agarwal, Senior Architect at Zepto, shared the company’s journey with MongoDB during a presentation at MongoDB.local Bengaluru in September 2024. “We had a big monolith. All the components were being powered by PostgreSQL and a few Redis clusters,” said Agarwal. “As our business was scaling, we were facing a lot of performance issues, as well as restrictions in terms of the velocity at which we wanted to operate.”

Zepto’s legacy architecture posed four key issues:

- Performance bottlenecks: As Zepto grew, the need for complex database queries increased. These queries required multiple joins, which put a significant strain on the system, resulting in high CPU usage and an inability to provide customers and delivery partners with accurate data.
- Latency: Zepto needed its API response times to be fast.
However, as the system grew, background processing tasks slowed down. This led to delays and caused the system to serve stale data to customers.
- A need for real-time analytics: Teams on the ground, such as packers and riders, required real-time insights on stock availability and performance metrics. Building an extract, transform, and load (ETL) pipeline for this was both time-consuming and resource-intensive.
- Increased data scaling requirements: Zepto’s data was growing exponentially. Managing it efficiently became increasingly difficult, especially when real-time archival and retrieval were required.

MongoDB Atlas meets Zepto’s goals

“We wanted to break our monolith into microservices and move to a NoSQL database. But we wanted to evaluate multiple databases,” said Agarwal. Zepto was looking for a document database that would let its team query data even when the documents were structured in a nested fashion. The team also needed queryability on array-based attributes or columns. MongoDB fulfilled both use cases.

“Very optimally, we were able to do some [proofs of concept]. The queries were very performant, given the required indexes we had created, and that gave us confidence,” said Agarwal. “The biggest motivation factor was when we saw that MongoDB provides in-memory caching, which could address our huge Redis cluster that we couldn’t scale further.”

Beyond scalability, MongoDB Atlas also provided high reliability and several built-in capabilities. That helped Zepto manage its infrastructure day to day and create greater efficiencies for both its end users and its technical team. Speaking alongside Agarwal at MongoDB.local Bengaluru, Kshitij Singh, Technical Lead for Zepto, explained: “When we discovered MongoDB Atlas, we saw that there were a lot of built-in features like the MongoDB chat support, which gave us very qualitative insights whenever we faced any issues.
That was an awesome experience for us.” Data archival, sharding support, and real-time analytic capabilities were also key in helping the Zepto team improve operational efficiencies.

With MongoDB, Zepto was able to deploy new features more quickly. Data storage at the document level meant less management overhead and faster time to market for new capabilities. Furthermore, MongoDB’s archival feature made it easier for Zepto to manage large datasets. The feature also simplified the setup of secondary databases for ETL pipelines, reducing the heavy lifting for developers. “You go on the MongoDB Atlas platform and can configure archival in just one click,” said Singh.

Zepto reduces latency, handles six times more traffic, and more

The results of migrating to MongoDB Atlas were immediate and significant:

- Zepto saw a 40% reduction in latency for some of its most critical APIs, which directly improved the customer experience.
- Post-migration, Zepto’s infrastructure could handle six times more traffic than before, without any degradation in performance. This scalability enabled the company to continue its rapid growth without bottlenecks.
- Page load times improved by 14%, leading to higher conversion rates and increased sales.
- MongoDB’s support for analytical nodes helped Zepto segregate customer-facing workloads from internal queries. This ensured that customer performance was never compromised by internal reporting or analytics.

“MongoDB is helping us grow our business exponentially,” said Agarwal at the end of his presentation.

Visit our product page to learn more about MongoDB Atlas.

December 17, 2024

Checkpointers and Native Parent Child Retrievers with LangChain and MongoDB

MongoDB and LangChain, the company known for its eponymous large language model (LLM) application framework, are excited to announce new developments in an already strong partnership. Two additional enhancements have just been added to the LangChain codebase, making it easier than ever to build cutting-edge AI solutions with MongoDB.

Checkpointer support

In LangGraph, LangChain’s library for building stateful, multi-actor applications with LLMs, memory is provided through checkpointers. Checkpointers are snapshots of the graph state at a given point in time. They provide a persistence layer, allowing developers to interact with and manage the graph’s state. This has a number of advantages for developers—human-in-the-loop workflows, "memory" between interactions, and more.

Figure adapted from “Launching Long-Term Memory Support in LangGraph,” LangChain Blog, Oct. 8, 2024. https://blog.langchain.dev/launching-long-term-memory-support-in-langgraph/

MongoDB has developed a custom checkpointer implementation, the "MongoDBSaver" class, that, with just a MongoDB URI (local or Atlas), can easily store LangGraph state in MongoDB. By making checkpointers a first-class feature, developers can have confidence that their stateful AI applications built on MongoDB will be performant. That’s not all: there are actually two new checkpointers as part of this implementation—one synchronous and one asynchronous—serving developers with a myriad of use cases. Both implementations include helpful utility functions to make using them painless, letting developers easily store instances of StateGraph inside of MongoDB. A performant persistence layer that stores data in an intuitive way means a better end-user experience and a more robust system, no matter what a developer is building with LangGraph.

Native parent child retrievers

Second, MongoDB has implemented a native parent child retriever inside LangChain.
This approach enhances the performance of retrieval methods utilizing the retrieval-augmented generation (RAG) technique by providing the LLM with a broader context to consider. In essence, we divide the original documents into relatively small chunks, embed each one, and store them in MongoDB. Using such small chunks (a sentence or a couple of sentences) helps the embedding models to better reflect their meaning.

Now developers can use "MongoDBAtlasParentDocumentRetriever" to persist one collection for both vector and document storage. In this implementation, we can store both parent and child documents in a single collection while only having to compute and index embedding vectors for the chunks. This has a number of performance advantages: storing vectors with their associated documents means there is no need to join tables or worry about painful schema migrations.

Additionally, as part of this work, MongoDB has also added a "MongoDBDocStore" class, which provides many helpful utility functions. It is now easier than ever to use documents as a key-value store and insert, update, and delete them with ease.

Taken together, these two new classes allow developers to take full advantage of MongoDB’s capabilities. MongoDB and LangChain continue to be a strong pair for building agentic AI—combining performance and ease of development to provide a developer-friendly experience. Stay tuned as we build out additional functionality!

To learn more about these LangChain integrations, here are some resources to get you started:

- Check out our tutorial.
- Experiment with checkpointers and native parent child retrievers to see their utility for yourself.
- Read the previous announcement with LangChain about AI Agents, Hybrid Search, and Indexing.
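The parent-child pattern described above can be sketched in plain Python. This is a simplified stand-in, not the MongoDBAtlasParentDocumentRetriever implementation: substring matching stands in for vector similarity, and a list stands in for the single MongoDB collection, but the core idea is the same—match on small chunks, return the full parent document.

```python
# Plain-Python sketch of the parent-child retrieval pattern.
# All names and structures here are simplified stand-ins.

collection = []  # one "collection" holds parents and chunks alike

def add_document(doc_id, text, chunk_size=40):
    """Store the parent once; only the small chunks would be embedded."""
    collection.append({"_id": doc_id, "kind": "parent", "text": text})
    for i in range(0, len(text), chunk_size):
        collection.append({
            "kind": "chunk",
            "parent_id": doc_id,
            "text": text[i:i + chunk_size],  # small chunk: precise matching
        })

def retrieve(query):
    """Match on small chunks, but hand back the full parent context."""
    hits = [d for d in collection if d["kind"] == "chunk" and query in d["text"]]
    parent_ids = {c["parent_id"] for c in hits}
    return [d for d in collection if d["kind"] == "parent" and d["_id"] in parent_ids]

add_document("doc-1", "MongoDB stores documents in flexible BSON. " * 3)
add_document("doc-2", "LangChain orchestrates LLM applications end to end.")
parents = retrieve("BSON")  # chunk matches, whole parent is returned
```

Because parents and chunks live in the same store, a matched chunk resolves to its parent with a single lookup, which is the performance advantage the single-collection design is after.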

December 16, 2024

Building Gen AI with MongoDB & AI Partners | November 2024

Unless you’ve been living under a rock, you know it’s that time of year again—re:Invent season! Last week, I was in Las Vegas for AWS re:Invent, one of our industry’s most important annual conferences. re:Invent 2024 was a whirlwind of keynote speeches, inspirational panels and talks, and myriad ways to spend time with colleagues and partners alike. And this year, MongoDB had its biggest re:Invent presence ever, alongside some of the most innovative players in AI.

The headline? The MongoDB AI Applications Program (MAAP). Capgemini, Confluent, IBM, QuantumBlack AI by McKinsey, and Unstructured joined MAAP, boosting the value customers receive from the program and cementing MongoDB’s position as a leader in driving AI innovation. We also announced that MongoDB is collaborating with Meta to support developers with Meta models and the end-to-end MAAP technology stack.

Figure 1: The MongoDB booth at re:Invent 2024

MongoDB’s re:Invent AI Showcase was another showstopper. As part of the AI Hub in the re:Invent expo hall, MongoDB and partners Arcee, Arize, Fireworks AI, and Together AI collaborated on engaging demos and presentations. Meanwhile, the “Building Your AI Stack” panel—which included leaders from MongoDB and MAAP partners Anyscale, Cohere, and Fireworks AI—featured an insightful discussion on building AI technologies, challenges with taking applications to production, and what’s next in AI.

As at every re:Invent, networking opportunities abounded; I had so many interesting and fruitful conversations with partners, customers, and developers during the week’s many events, including those MongoDB sponsored—like the Cabaret of Innovation with Accenture, Anthropic, and AWS; the Galactic Gala with Cohere; and Tuesday’s fun AI Game Night with Arize, Fireworks AI, and Hasura.
Figure 2: Networking at the Galactic Gala

Whether building solutions or building relationships, MongoDB’s activities at re:Invent 2024 showcased the importance of collaboration to the future of AI. As we close out the year, I’d like to thank our amazing partners for their support—we look forward to more opportunities to collaborate in 2025! And if you want to learn more about MongoDB’s announcements at re:Invent 2024, please read this blog post by my colleague Oliver Tree.

Welcoming new AI and tech partners

In November, we also welcomed two new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Braintrust

Braintrust is an end-to-end platform for building and evaluating world-class AI apps. “We're excited to partner with MongoDB to share how you can build reliable and scalable AI applications with vector databases,” said Ankur Goyal, CEO of Braintrust. “By combining Braintrust’s simple evaluation workflows with MongoDB Atlas, developers can build an end-to-end RAG application and iterate on prompts and models without redeploying their code.”

Langtrace

Langtrace is an open-source observability tool that collects and analyzes traces in order to help you improve your LLM apps. “We're thrilled to join forces with MongoDB to help companies trace, debug, and optimize their RAG features for faster production deployment and better accuracy,” said Karthik Kalyanaraman, Co-founder and CTO at Langtrace AI. “MongoDB has made it dead simple to launch a scalable vector database with operational data. Our collaboration streamlines the RAG development process by empowering teams with database observability, speeding up time to market and helping companies get real value to customers faster.”

But wait, there's more!
To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

December 12, 2024

Binary Quantization & Rescoring: 96% Less Memory, Faster Search

We are excited to share that several new vector quantization capabilities are now available in public preview in MongoDB Atlas Vector Search: support for binary quantized vector ingestion, automatic scalar quantization, and automatic binary quantization and rescoring. Together with our recently released support for scalar quantized vector ingestion, these capabilities will empower developers to scale semantic search and generative AI applications more cost-effectively. For a primer on vector quantization, check out our previous blog post.

Enhanced developer experience with native quantization in Atlas Vector Search

Effective quantization methods—specifically scalar and binary quantization—can now be done automatically in Atlas Vector Search. This makes it easier and more cost-effective for developers to use Atlas Vector Search to unlock a wide range of applications, particularly those requiring over a million vectors. With the new “quantization” index definition parameter, developers can choose to use full-fidelity vectors by specifying “none,” or they can quantize vector embeddings by specifying the desired quantization type—“scalar” or “binary” (Figure 1). This native quantization capability supports vector embeddings from any model provider as well as MongoDB’s BinData float32 vector subtype.

Figure 1: New index definition parameters for specifying automatic quantization type in Atlas Vector Search

Scalar quantization—converting a floating-point value into an integer—is generally used when it's crucial to maintain search accuracy on par with full-precision vectors. Meanwhile, binary quantization—converting a floating-point value into a single bit of 0 or 1—is more suitable for scenarios where storage and memory efficiency are paramount and a slight reduction in search accuracy is acceptable. If you’re interested in learning more about this process, check out our documentation.
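As an illustration of the "quantization" parameter described above, here is a minimal sketch of a vector index definition expressed as a Python dict. The field path, dimension count, and similarity function are hypothetical placeholders; in practice the definition would be created through the Atlas UI, CLI, or a driver.

```python
# Illustrative Atlas Vector Search index definition with automatic
# quantization enabled. Path, dimensions, and similarity are examples.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",       # field containing the embeddings
            "numDimensions": 1024,     # must match the embedding model
            "similarity": "cosine",
            # "none" keeps full-fidelity vectors; "scalar" or "binary"
            # turns on automatic quantization.
            "quantization": "binary",
        }
    ]
}
```

Switching between "scalar" and "binary" is then an index-definition change rather than a change to how embeddings are generated or ingested.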
Binary quantization with rescoring: Balance cost and accuracy

Compared to scalar quantization, binary quantization further reduces memory usage, leading to lower costs and improved scalability—but also a decline in search accuracy. To mitigate this, when “binary” is chosen in the “quantization” index parameter, Atlas Vector Search incorporates an automatic rescoring step, which involves re-ranking a subset of the top binary vector search results using their full-precision counterparts, ensuring that the final search results are highly accurate despite the initial vector compression. Empirical evidence demonstrates that incorporating a rescoring step when working with binary quantized vectors can dramatically enhance search accuracy, as shown in Figure 2 below.

Figure 2: Combining binary quantization and rescoring helps retain search accuracy by up to 95%

And as Figure 3 shows, in our tests, binary quantization reduced processing memory requirements by 96% while retaining up to 95% search accuracy and improving query performance.

Figure 3: Improvements in Atlas Vector Search with the use of vector quantization

It’s worth noting that even though the quantized vectors are used for indexing and search, full-fidelity vectors are still stored on disk to support rescoring. Furthermore, retaining the full-fidelity vectors enables developers to perform exact vector search for experimental, high-precision use cases, such as evaluating the search accuracy of quantized vectors produced by different embedding model providers. For more on evaluating the accuracy of quantized vectors, please see our documentation.

So how can developers make the most of vector quantization? Here are some example use cases that can be made more efficient and scaled effectively with quantized vectors:

- Massive knowledge bases can be used efficiently and cost-effectively for analysis and insight-oriented use cases, such as content summarization and sentiment analysis.
Unstructured data like customer reviews, articles, audio, and videos can be processed and analyzed at a much larger scale, at lower cost and faster speed. Using quantized vectors can enhance the performance of retrieval-augmented generation (RAG) applications: the efficient processing supports query performance over large knowledge bases, and the cost advantage enables a more scalable, robust RAG system, resulting in better customer and employee experiences. Developers can easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB's flexible document model lets developers quickly deploy and compare embedding models' results without rebuilding the index or provisioning an entirely new data model or set of infrastructure. The relevance of search results or context for large language models (LLMs) can be improved by incorporating larger volumes of vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded with the same or different models. To get started with vector quantization in Atlas Vector Search, see the following developer resources: Documentation: Vector Quantization in Atlas Vector Search Documentation: How to Measure the Accuracy of Your Query Results Tutorial: How to Use Cohere's Quantized Vectors to Build Cost-effective AI Apps With MongoDB
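The accuracy-evaluation workflow described earlier, comparing approximate results from the quantized index against exact results over the retained full-fidelity vectors, can be sketched as follows. The index name, field path, and toy query vector are assumptions for illustration.

```python
# Sketch: compare an approximate (ANN) query over the quantized index
# against an exact (ENN) query over the retained full-fidelity vectors.
query_vector = [0.12, -0.07, 0.33]  # stand-in for a real embedding

ann_stage = {
    "$vectorSearch": {
        "index": "vector_index",
        "path": "plot_embedding",
        "queryVector": query_vector,
        "numCandidates": 100,  # ANN only: candidates scanned before ranking
        "limit": 10,
    }
}

enn_stage = {
    "$vectorSearch": {
        "index": "vector_index",
        "path": "plot_embedding",
        "queryVector": query_vector,
        "exact": True,  # exact search; numCandidates is omitted here
        "limit": 10,
    }
}

# Against a live cluster, the overlap between the two result sets
# estimates the recall of the quantized index:
# ann_ids = {d["_id"] for d in collection.aggregate([ann_stage])}
# enn_ids = {d["_id"] for d in collection.aggregate([enn_stage])}
# recall = len(ann_ids & enn_ids) / len(enn_ids)
```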
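The A/B-testing pattern mentioned above, with embeddings from different models stored side by side in the same document, might look like this minimal sketch. Field names, dimensions, and values are illustrative assumptions.

```python
# Sketch: embeddings from two different models stored in one document,
# each indexed as its own vector field.
product = {
    "_id": 1,
    "description": "Wireless noise-cancelling headphones",
    "embedding_model_a": [0.12, -0.08, 0.33],        # 3-dim stand-in
    "embedding_model_b": [0.05, 0.41, -0.27, 0.19],  # 4-dim stand-in
}

# One index definition can cover both fields, so both models are
# queryable without reshaping the data model or infrastructure:
ab_index_definition = {
    "fields": [
        {"type": "vector", "path": "embedding_model_a",
         "numDimensions": 3, "similarity": "cosine", "quantization": "scalar"},
        {"type": "vector", "path": "embedding_model_b",
         "numDimensions": 4, "similarity": "cosine", "quantization": "scalar"},
    ]
}
```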

December 12, 2024

IntellectAI Unleashes AI at Scale With MongoDB

IntellectAI, a business unit of Intellect Design Arena, is a trailblazer in AI. Since 2019, the company has been using MongoDB to drive a number of innovative use cases in the banking, financial services, and insurance (BFSI) industry. For example, Intellect Design Arena's broader insurance business has been using MongoDB Atlas as a foundation for its architecture. Atlas's flexibility enables Intellect Design Arena to manage varied and constantly evolving datasets and to increase operational performance. Building on this experience, the company looked to deepen its use of MongoDB Atlas's unique AI and search capabilities for its new IntellectAI division. IntellectAI Partner and Chief Technology Officer Deepak Dastrala spoke on the MongoDB.local Mumbai stage in September 2024, sharing how the company has built a powerful, scalable, and highly accurate AI platform-as-a-service offering, Purple Fabric, using MongoDB Atlas and Atlas Vector Search. Using AI to generate actionable compliance insights for clients Purple Fabric helps transform enterprise data into actionable AI insights and solutions by making data ready for retrieval-augmented generation (RAG). The platform collects and analyzes structured and unstructured enterprise data, policies, market data, regulatory information, and tacit knowledge to enable its AI Expert Agent System to achieve precise, goal-driven outcomes with accuracy and speed. A significant part of IntellectAI's work involves assessing environmental, social, and governance (ESG) compliance, which requires companies to monitor diverse nonfinancial factors such as child labor practices, supply chain ethics, and biodiversity. "Historically, 80% to 85% of AI projects fail because people are still worried about the quality of the data. With Generative AI, which is often unstructured, this concern becomes even more significant," said Deepak Dastrala.
According to Deepak Dastrala, the challenge today is less about building AI tools than about operationalizing AI effectively. A prime example of this is IntellectAI’s work with one of the largest sovereign wealth funds in the world, which manages over $1.5 trillion across 9,000 companies. The fund sought to utilize AI for making responsible investment decisions based on millions of unique data points across those companies, including compliance, risk prediction, and impact assessment. This included processing both structured and unstructured data to enable the fund to make informed, real-time decisions. “We had to process almost 10 million documents in more than 30 different data formats—text and image—and correlate both structured and unstructured data to provide those particular hard-to-find insights,” said Dastrala. “We ingested hundreds of millions of vectors across these documents, and this is where we truly understood the power of MongoDB.” For example, by leveraging MongoDB's capabilities, including time series collections, IntellectAI simplifies the processing of unstructured and semi-structured data from companies' reports over various years, extracting key performance metrics and trends to enhance compliance insights. “MongoDB Atlas and Vector Search give us flexibility around the schema and how we can turn particular data into knowledge,” Dastrala said. For Dastrala, there are four unique advantages of working with MongoDB—particularly using MongoDB Atlas Vector Search—that other companies should consider when building long-term AI strategies: a unified data model, multimodality, dynamic data linking, and simplicity. “For me, the unified data model is a really big thing because a stand-alone vector database will not help you. The kind of data that you will continue to ingest will increase, and there are no limits. So whatever choices that you make, you need to make the choices from the long-term perspective,” said Dastrala. 
Delivering massive scale, driving more than 90% AI accuracy, and accelerating decision-making with MongoDB Before IntellectAI built this ESG capability, its client relied on subject matter experts, but they could examine only a limited number of companies and datasets and were unable to scale their investigation of portfolios or information. "If you want to do it at scale, you need proper enterprise support, and that's where MongoDB became really handy for us. We are able to give 100% coverage and do what the ESG analysts were able to do for this organization almost a thousand times faster," said Dastrala. Previously, analysts could examine only between 100 and 150 companies. With MongoDB Atlas and Atlas Vector Search, Purple Fabric can now process information from over 8,000 companies across the world, covering different languages and delivering more than 90% accuracy. "Generally, RAG will probably give you 80% to 85% accuracy. But in our case, we are talking about a fund deciding whether to invest billions or not in a company, so the accuracy should be 90% minimum," said Dastrala. "What we are doing is not 'simple search'; it is very contextual, and MongoDB helps us provide that high-dimension data." Concluding his presentation on the MongoDB.local stage, Dastrala reminded the audience why IntellectAI is using MongoDB's unique capabilities to support its long-term vision: "Multimodality is very important because today we are using text and images, but tomorrow we might use audio, video, and more. And don't forget, from a developer perspective, how important it is to keep the simplicity and leverage all the options that MongoDB provides." This is just the beginning for IntellectAI and its Purple Fabric platform. "Because we are doing more and more with greater accuracy, our customers have started giving us more problems to solve. And this is absolutely happening at a scale [that] is unprecedented," said Dastrala.
Using MongoDB Atlas to drive broader business benefits across Intellect Design The success of the Purple Fabric platform is leading Intellect Design's broader business to consider MongoDB Atlas for more use cases. Intellect Design is currently in the process of migrating more of its insurance and Wealth platforms onto MongoDB Atlas, as well as leveraging the product family to support the next phase of its app modernization strategy. Using MongoDB Atlas, Intellect Design aims to improve resilience, support scalable growth, decrease time to market, and enhance data insights. Head over to our product page to learn more about MongoDB Atlas. To learn more about how MongoDB Atlas Vector Search can help you build or deepen your AI and search capabilities, visit our Vector Search page.

December 12, 2024

Chunghwa Telecom Reshapes the Customer Service Experience: MongoDB Atlas Boosts Performance 10x

Diversifying consumer demand makes flexible pricing plans the mainstream As mobile communications technology continues to evolve, Chunghwa Telecom, one of Taiwan's three largest telecom operators, has accelerated its digital transformation in recent years. Beyond actively building out a denser network of base stations, it is pursuing new subscribers with flexible network pricing and product bundles. Tsao Han-Ching, Senior Technical Engineer at Chunghwa Telecom's Information Technology Branch, says: "Over the past few years we have continued to strengthen our core capabilities, and through alliances and partnerships we have actively developed new services such as mobile commerce, internet applications, and broadband multimedia. MongoDB Atlas lets us grasp consumer needs precisely and offer more flexible pricing combinations, maintaining our leading position in the market." Challenge: Relational database limitations made it hard to meet customer expectations To deliver better service quality, Chunghwa Telecom defines its product management system and customer interaction services according to TM Forum ODA, yet it faced a massive volume of customer queries. After in-depth analysis, the team identified three major challenges in its existing relational database architecture: fields were difficult to extend, processing capacity was limited, and field lengths were restricted. "For this reason, we decided to pair our relational database with a NoSQL database," Tsao explains. "Besides solving the problems above, this also let us align with the ESG wave and strengthen our digital resilience. In the end we chose MongoDB Atlas, in the hope of giving our customers a better user experience." Solution: Multi-cloud architecture plus compliance and security won Chunghwa Telecom over The main reasons MongoDB Atlas appealed to Chunghwa Telecom were, first, its support for multi-region, multi-cloud architecture: the service is available on all three major public cloud platforms. Second, on compliance and security, MongoDB Atlas meets standards such as ISO 27001, HIPAA, PCI, and GDPR, and is FedRAMP certified. In addition, MongoDB Atlas offers diverse, flexible plan combinations. Chunghwa Telecom's deep analysis of the problem and its innovative solution demonstrated a strong understanding and application of emerging technologies, which not only accelerated the adoption of MongoDB Atlas but also significantly improved overall system performance. Tsao notes that with the help of MongoDB's technical team, the company achieved its goal of simplifying disaster recovery procedures: when network connection quality degrades or a single node fails unexpectedly, the customer interaction service system automatically fails over between the primary and secondary MongoDB databases, avoiding data loss. From project kickoff to go-live, the two teams worked closely together, demonstrating outstanding technical skill and collaboration. Chunghwa Telecom showed active engagement and close attention to detail at every stage, from architecture planning and capacity design to system testing and deployment. Improved stability and performance meet massive query volumes The Chunghwa Telecom app has been downloaded more than 8 million times and is a key touchpoint between the company and its customers. After the MongoDB Atlas read/write-splitting architecture went live, batch data-loading performance improved 6x and monthly report computation sped up 20x. On stability, the system now handles peaks of 200-300 queries per second (QPS), so "timeout" errors no longer occur. On the maintenance side, Chunghwa Telecom has reduced both administrative headcount and operational risk, and the management team uses the platform's monitoring tools to track performance in real time. Looking ahead MongoDB and Chunghwa Telecom share a common vision for technological innovation and market expansion. Chunghwa Telecom's investment in data technology aligns closely with MongoDB's mission of empowering enterprises to innovate with data, and the partnership goes beyond technical exchange: it is a joint commitment to delivering an excellent customer experience. Building on MongoDB Atlas, Chunghwa Telecom plans to keep optimizing the quality of its customer service interaction system and to maintain its leading position in Taiwan's telecom market. "Not only are we satisfied with the quality, speed, and reliability of the MongoDB Atlas service, we also enjoy a professional, fully managed database service. MongoDB's local technical support team was equally impressive, helping Chunghwa Telecom migrate its on-premises data to MongoDB Atlas. Along the way, we discussed architecture, capacity planning, and day-to-day operations in detail with the vendor's project team, and they supported our pre-launch testing, allowing the project to go live seamlessly and on schedule, greatly benefiting Chunghwa Telecom's overall service quality." Tsao Han-Ching, Senior Technical Engineer, Information Technology Branch, Chunghwa Telecom

December 12, 2024

Away From the Keyboard: Everton Agner, Staff Software Engineer

We’re back with a new article in our ongoing “Away From the Keyboard” series, featuring in-depth interviews with people at MongoDB, discussing what they do, how they prioritize time away from their work, and how they approach coding. Everton Agner, Staff Software Engineer at MongoDB, talked to us about why team support, transparent communication, and small rituals are important for creating healthy work-life boundaries. Q: What do you do at MongoDB? Ev: I’m a Staff Software Engineer on the Atlas Foundational Services team. In practice, that means that I develop systems, tools, frameworks, and processes, and provide guidance within our systems architecture to other engineering teams so they can deliver value and make their customers happy! Q: What does work-life balance look like for you? Ev: My team is hybrid and distributed. I enjoy going to our office a couple of times every week (but don’t have to), and all of our team processes are built with remote friendliness in mind, which is very helpful. Occasionally, I go on call for a week and make sure that my laptop is reachable in case something happens that needs my attention. On my team, when an on-call shift falls on a particularly inconvenient day or weekend, we are very supportive, and usually someone is able to swap rotations. Q: How do you ensure you set boundaries between work and personal life? Ev: It’s very easy to fall into the trap of never really disconnecting, thinking about work, or simply working all day when it’s just an open laptop away. As a rule of thumb, I tell myself that I only ever spend time outside of business hours doing anything work-related when I am not asked or expected to do so by anyone. When I do it, it’s because I want to and will likely have some fun! On the other hand, I’m very transparent when it comes to my personal life and responsibilities, as well as any work adjustments that are needed.
Transparency is key, and I’m very lucky that all my managers at MongoDB have always been very accommodating. Q: Has work-life balance always been a priority for you, or did you develop it later in your career? Ev: It always was, but I struggled a bit during my first experience working from home in a hybrid model. Over time, I realized that the small rituals from the days I commuted to the office, like getting ready in the morning and driving back home after work, were essential for me to “flip the switch” into and out of work mode. Developing new rituals when I worked from home—like making sure I had breakfast, taking care of my pets, or exercising after work—was essential for me to truly disconnect when I close my laptop. Otherwise, I would struggle to enjoy my personal time during the evening or would think about work right after waking up in the morning. Q: What benefits has this balance given you in your career? Ev: I feel like both my personal and professional lives have benefited. On the personal side, it’s really nice to know that my work schedule accommodates me not being a big morning person, and that it can absorb personal appointments that overlap with business hours, like language classes (I’m currently learning Japanese!). On the professional side, I sometimes find it productive to spend off-hours researching, writing experimental code or documents, or just getting ready for the next day while everything’s quiet. Q: What advice would you give to someone seeking to find a better balance? Ev: For me, work-life balance means being able to fully dedicate myself to my personal life without affecting success at my job, and vice versa. Most importantly, make sure that it’s sustainable and not detrimental to your health. On a more practical note, if you have access to work emails or communication channels on your phone, learning how to set up meaningful notifications is critical.
If your phone notifies you of anything work-related outside of working hours, it needs to be important and actionable! Thank you to Everton Agner for sharing their insights! And thanks to all of you for reading. For past articles in this series, check out our interviews with: Senior AI Developer Advocate, Apoorva Joshi Developer Advocate Anaiya Raisinghani Senior Partner Marketing Manager Rafa Liou Interested in learning more about or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events. And let us know if you have any questions for our future guests when it comes to building a better work-life balance as developers. Tag us on social media: @/mongodb #LoveYourDevelopers #AwayFromTheKeyboard

December 11, 2024

Capgemini and MongoDB Join Forces to Unleash Their Customers' Full Potential

Capgemini supports companies on their digital and business transformation journeys by harnessing the power of technology. A strategic partner to organizations around the world, the multinational has been using digital technology to transform its clients' businesses for more than 50 years. Capgemini delivers innovative solutions for every business need, from strategy to design to operations, drawing on its expertise in cloud, data, AI, connectivity, software, digital engineering, and platforms. With revenues of more than 22 billion euros and 340,000 employees, Capgemini operates in 50 countries; in Italy it has a presence in 20 cities. Capgemini I&D is the multinational's data division. In 2023, the management of this important unit went looking for an innovative and effective solution for building applications that interact with NoSQL databases. The choice fell on MongoDB, the most significant solution in that segment. "It is not the only technology available, but it is the most relevant of all," says Stefano Sponga, head of go to market and portfolio I&D Italia at Capgemini, "and above all, because of how it is structured, the dynamism of its data modeling, and the ability to use vector search, it fits very well with the growing demands of generative AI." Offloading traditional databases and modernizing applications: two is better than one "The MongoDB solution," Sponga recounts, "has allowed us to create particularly effective generative AI solutions for the insurance world, and we are also collaborating with MongoDB in banking. I won't hide that the effectiveness of the partnership comes not only from the technology but also from our shared roots in the open-source community."
The most frequent use cases involve mainframe offloading and application modernization, which bring clients several benefits: reduced MIPS consumption (and therefore gains in energy and environmental sustainability), more efficient processes, and lower costs. "MongoDB can be used, for example, as a low-cost, high-efficiency storage environment," says Sponga, "to which calls on specific channels, such as apps, can be redirected instead of hitting the mainframe. Or rarely accessed data can be archived and queried effectively, separately from the data warehouse, with the two worlds easily integrated." Capgemini is also using MongoDB technology to build applications in the retrieval-augmented generation (RAG) space, which harness the power of generative AI and large language models while restricting searches to a closed set of documents (for example, insurance contracts). "As for the experience in Italy, which I have seen first-hand," Sponga continues, "I have noticed both the reach of the MongoDB organization and the dynamism of its people. There has always been great responsiveness to our questions, but the most interesting thing is the affinity between how our teams operate: a strategic, rather than tactical, approach to projects." A future of alignment and joint growth "The synergies between our approach and MongoDB's," Sponga concludes, "will trigger further integration, enabled by the technology. We are thinking, for example, of extending the partnership to our core banking products. In short, this is a tight-knit, strategic collaboration, not an opportunistic, tactical systems integration."
More broadly, Capgemini intends to use MongoDB technology in all those use cases that call less for heavy numerical analysis than for data heterogeneity and scalability. "It is not hard to imagine applications in the energy sector, for example," Sponga adds, "or in space and defense." "MongoDB is the most relevant vendor in the NoSQL space. Our strategic partnership with them is young but already marked by a very strong bond, a relationship in which top management also takes part." Stefano Sponga, head of go to market and portfolio I&D Italia, Capgemini See how to implement effective mainframe offloading with MongoDB Atlas.

December 11, 2024

Atlas Stream Processing Now Supports Azure and Azure Private Link

Today, we’re excited to announce that Atlas Stream Processing now supports Microsoft Azure! This update opens new possibilities for developers leveraging Azure’s cloud ecosystem, offering a way to:

- Seamlessly integrate MongoDB Atlas and Apache Kafka
- Effortlessly handle complex and rapidly changing data structures
- Use the familiarity of the MongoDB Query API for processing streaming data
- Benefit from a fully managed service that eliminates operational overhead

Azure support in four regions At launch, we’re supporting four Azure regions spanning both the U.S. and Europe:

- US East: Virginia, US
- US East 2: Virginia, US
- US West: California, US
- West Europe: Netherlands

We’ll continue adding more regions across cloud providers in the future. Let us know which regions you need next in UserVoice. Atlas Stream Processing simplifies integrating MongoDB with Apache Kafka to build event-driven applications. New to Atlas Stream Processing? Watch our 3-minute explainer. How it works Working with Atlas Stream Processing on Azure feels just like it does today on AWS. During Stream Processing Instance (SPI) tier selection in the Atlas UI or CLI, simply select Azure as your provider and then choose your desired region. Figure 1: Stream Processing instance setup via the Atlas UI $ atlas streams instances create AzureSPI --provider AZURE --region westus --tier SP10 Figure 2: Stream Processing instance setup via the Atlas CLI Secure networking for Azure Event Hubs via Azure Private Link In addition to adding support for Azure in multiple regions, we’re introducing Azure Private Link support for developers using Azure Event Hubs. Event Hubs is Azure’s native, Kafka-compatible data streaming service. As a reminder, Atlas Stream Processing supports any service that uses the Kafka wire protocol, including Azure Event Hubs, Amazon Managed Streaming for Apache Kafka (MSK), Redpanda, and Confluent Cloud.
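Once an instance exists, a processor is defined as an aggregation-style pipeline. The sketch below (expressed as a Python list for readability; in practice the pipeline is passed to sp.createStreamProcessor() in mongosh) shows the general shape of a Kafka-to-Atlas processor. Connection, topic, database, and collection names are assumptions.

```python
# Sketch of a simple stream processor: read from a Kafka-compatible
# source (such as Azure Event Hubs), filter, and continuously merge
# into an Atlas collection.
pipeline = [
    # Read events from the Kafka-compatible connection registered in Atlas
    {"$source": {"connectionName": "kafkaConnection", "topic": "orders"}},
    # Example transformation: keep only high-value orders
    {"$match": {"total": {"$gt": 100}}},
    # Continuously write results into an Atlas collection
    {"$merge": {"into": {
        "connectionName": "atlasConnection",
        "db": "sales",
        "coll": "bigOrders",
    }}},
]
```

Because the processor is just a pipeline, the same MongoDB Query API skills used for database aggregations carry over to streaming data.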
As we have written before, security is critical for data services, and it’s especially important with stream processing systems, where connections to technologies external to the database, like Apache Kafka, are required. For this reason, we’re engineering Atlas Stream Processing to leverage the advanced networking capabilities available through the major cloud providers (AWS, Azure, and GCP). Networking To better understand the value of Private Link support, let’s summarize the three key ways developers typically connect data services:

- Public networking
- Private networking through VPC peering
- Private networking through private link

Public networking connects services using public IP addresses. It is the easiest approach to set up, but the least secure of the three. Private networking through VPC peering connects services across two virtual private clouds (VPCs). This improves security compared with public networking by keeping traffic off the public internet and is commonly used for testing and development purposes. Private networking through private link is more secure still, because it enforces connections to specific endpoints. While VPC peering lets resources in one VPC connect to all of the resources in the other VPC, private link ensures that each resource can only connect to defined services with specific associated endpoints. This connection method is important for use cases relying on sensitive data. Figure 3: Private Link allows for connecting to specific endpoints Ready to get started? With support for Azure Private Link, Atlas Stream Processing now makes it simple to implement the most secure networking method across MongoDB and Kafka on Azure Event Hubs. Log in today to get started, or check out our documentation to create your first private link connection.

December 10, 2024