MongoDB Blog

Announcements, updates, news, and more

Reintroducing the Versioned MongoDB Atlas Administration API

Our MongoDB Atlas Administration API has gotten some work done in the last couple of years to become the best “Versioned” of itself. In this blog post, we’ll go over what’s changed and why migrating to the newest version can help you have a seamless experience managing MongoDB Atlas.

What does the MongoDB Atlas Administration API do?

MongoDB Atlas, MongoDB’s managed developer data platform, contains a range of tools and capabilities that enable developers to build their applications’ data infrastructure with confidence. As application requirements and developer teams grow, MongoDB Atlas users might want to further automate database operation management to scale their application development cycles and enhance the developer experience.

The entry point to managing MongoDB Atlas in a more programmatic fashion is the legacy MongoDB Atlas Administration API. This API enables developers to manage their use of MongoDB Atlas at a control plane level. The API and its various endpoints enable developers to interact with different MongoDB Atlas resources—such as clusters, database users, or backups—and let them perform operational tasks like creating, modifying, and deleting those resources. Additionally, the Atlas Administration API supports the MongoDB Atlas Go SDK, which empowers developers to seamlessly interact with the full range of MongoDB Atlas features and capabilities using the Go programming language.

Why should I migrate to the Versioned Atlas Administration API?

While it serves the same purpose as the legacy version, the new Versioned Atlas Administration API provides a significantly better overall experience in accessing MongoDB Atlas programmatically. Here’s what you can expect when you move over to the versioned API.

A better developer experience

The Versioned Atlas Administration API provides a predictable and consistent experience with API changes and gives better visibility into new features and changes via the Atlas Administration API changelog. This means that breaking changes that can impact your code will only be introduced in a new resource version and will not affect the production code running the current, stable version. Also, every time a new resource version is added, you will be notified of the older version being deprecated, giving you at least one year to upgrade before the removal of the previous resource version. As an added benefit, the Versioned Atlas Administration API supports Service Accounts as a new way to authenticate to MongoDB Atlas using the industry-standard OAuth 2.0 protocol with the Client Credentials flow.

Minimal workflow disruptions

With resource-level versioning, the Versioned Atlas Administration API provides specific resource versions, which are represented by dates. When migrating from the legacy, unversioned MongoDB Atlas Administration API (/v1) to the new Versioned Atlas Administration API (/v2), the API will default to resource version 2023-02-01. To simplify the initial migration, this resource version applies uniformly to all API resources (e.g., /backup or /clusters). This helps ensure that migrations do not adversely affect current MongoDB Atlas Administration API–based workloads. In the future, each resource can adopt a new version independently (e.g., /clusters might update to 2026-01-01 while /backup remains on 2023-02-01). This flexibility ensures you only need to act when a resource you use is deprecated.

Improved context and visibility

Our updated documentation provides detailed guidance on the versioning process.
All changes—including the release of new endpoints, the deprecation of resource versions, or nonbreaking updates to stable resources—are now tracked in a dedicated, automatically updated changelog. Additionally, the API specification offers enhanced visibility and context for all stable and deprecated resource versions, ensuring you can easily access documentation relevant to your specific use case.

Why should I migrate to the new Go SDK?

In addition to an updated API experience, we’ve introduced version 2 of the MongoDB Atlas Go SDK for the MongoDB Atlas Administration API. This version supports a range of capabilities that streamline your experience when using the Versioned Atlas Administration API:

- Full endpoint coverage: MongoDB Atlas Go SDK version 2 enables you to access all the features and capabilities that the versioned API offers today, with full endpoint coverage, so that you can programmatically use MongoDB Atlas in full.
- Flexibility: When interacting with the new versioned API through the new Go SDK, you can choose which version of the MongoDB Atlas Administration API you want to work with, giving you control over when breaking changes impact you.
- Ease of use: The new Go SDK simplifies getting started with the MongoDB Atlas Administration API. You’ll be able to work with fewer lines of code and prebuilt functions, structs, and methods that encapsulate the complexity of HTTP requests, authentication, error handling, versioning, and other low-level details.
- Immediate access to updates: When using the new Go SDK, you can immediately access any newly released API capabilities. Every time a new version of MongoDB Atlas is released, the SDK will be quickly updated and continuously maintained, ensuring compatibility with any changes in the API and speeding up your development process.

How can I experience the enhanced version?

To get started using the Versioned Atlas Administration API, you can visit the migration guide, which outlines how you can transition over from the legacy version. To learn more about the MongoDB Atlas Administration API, you can visit our documentation page.
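To make resource versioning concrete, here is a minimal Python sketch of calling the versioned API, pinning the 2023-02-01 resource version through the Accept header. The overall request shape is illustrative; check the API reference for the endpoints and versions relevant to your use case.

```python
# A minimal sketch: list projects ("groups" in API terms) while pinning a
# specific resource version via the versioned Accept media type.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v2"

resp = requests.get(
    f"{BASE}/groups",
    auth=HTTPDigestAuth("<PUBLIC_KEY>", "<PRIVATE_KEY>"),  # programmatic API key placeholders
    # The date in the media type pins the resource version, shielding this
    # code from breaking changes introduced in newer versions.
    headers={"Accept": "application/vnd.atlas.2023-02-01+json"},
)
resp.raise_for_status()
for project in resp.json().get("results", []):
    print(project["id"], project["name"])
```

Because the version is pinned per request, upgrading to a newer resource version becomes an explicit, reviewable change in your code rather than a surprise.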

February 12, 2025
Updates

Building Gen AI with MongoDB & AI Partners | January 2025

Even for those of us who work in technology, it can be hard to keep track of the awards companies give and receive throughout the year. For example, in the past few months MongoDB has announced both our own awards (such as the William Zola Award for Community Excellence) and awards the company has received—like the AWS Technology Partner of the Year NAMER and two awards from RepVue. And that’s just us! It can be a lot!

But as hard as they can be to follow, industry awards—and the recognition, thanks, and collaboration they represent—are important. They highlight the power and importance of working together and show how companies like MongoDB and partners are committed to building best-in-class solutions for customers. So without further ado, I’m pleased to announce that MongoDB has been named Technology Partner of the Year in Confluent’s 2025 Global Partner Awards! As a member of the MongoDB AI Applications Program (MAAP) ecosystem, Confluent enables businesses to build a trusted, real-time data foundation for generative AI applications through seamless integration with MongoDB and Atlas Vector Search. Above all, this award is a testament to MongoDB and Confluent’s shared vision: to help enterprises unlock the full potential of real-time data and AI. Here’s to what’s next!

Welcoming new AI and tech partners

It's been an action-packed start to the year: in January 2025, we welcomed six new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Base64

Base64 is an all-in-one solution to bring AI into document-based workflows, enabling complex document processing, workflow automation, AI agents, and data intelligence. “MongoDB provides a fantastic platform for storing and querying all kinds of data, but getting unstructured information like documents into a structured format can be a real challenge. That's where Base64 comes in. We're the perfect onramp, using AI to quickly and accurately extract the key data from documents and feed it right into MongoDB,” said Chris Huff, CEO of Base64. “This partnership makes it easier than ever for businesses to unlock the value hidden in their documents and leverage the full power of MongoDB.”

Dataloop

Dataloop is a platform that allows developers to build and orchestrate unstructured data pipelines and develop AI solutions faster. “We’re thrilled to join forces with MongoDB to empower companies in building multimodal AI agents,” said Nir Buschi, CBO and co-founder of Dataloop. “Our collaboration enables AI developers to combine Dataloop’s data-centric AI orchestration with MongoDB’s scalable database. Enterprises can seamlessly manage and process unstructured data, enabling smarter and faster deployment of AI agents. This partnership accelerates time to market and helps companies get real value to customers faster.”

Maxim AI

Maxim AI is an end-to-end AI simulation and evaluation platform, helping teams ship their AI agents reliably and more than 5x faster. “We're excited to collaborate with MongoDB to empower developers in building reliable, scalable AI agents faster than ever,” said Vaibhavi Gangwar, CEO of Maxim AI.
“By combining MongoDB’s robust vector database capabilities with Maxim’s comprehensive GenAI simulation, evaluation, and observability suite, this partnership enables teams to create high-performing retrieval-augmented generation (RAG) applications and deliver outstanding value to their customers.”

Mirror Security

Mirror Security offers a comprehensive AI security platform that provides advanced threat detection, security policy management, and continuous monitoring, ensuring compliance and protection for enterprises. “We're excited to partner with MongoDB to redefine security standards for enterprise AI deployment,” said Dr. Aditya Narayana, Chief Research Officer at Mirror Security. “By combining MongoDB's scalable infrastructure with Mirror Security's end-to-end vector encryption, we're making it simple for organizations to launch secure RAG pipelines and trusted AI agents. Our collaboration eliminates security-performance trade-offs, empowering enterprises in regulated industries to confidently accelerate their AI initiatives while maintaining the highest security standards.”

Squid AI

Squid AI is a full-featured platform for creating private AI agents in a faster, secure, and automated way. “As an AI agent platform that securely connects to MongoDB in minutes, we're looking forward to helping MongoDB customers reveal insights, take action on their data, and build enterprise AI agents,” said Leslie Lee, Head of Product at Squid AI. “By pairing Squid's semantic RAG and AI functions with MongoDB's exceptional performance, developers can build powerful AI agents that respond to new inputs in real time.”

TrojAI

TrojAI is an AI security platform that protects AI models and applications from new and evolving threats before they impact businesses. “TrojAI is thrilled to join forces with MongoDB to help companies secure their RAG-based AI apps built on MongoDB,” said Lee Weiner, CEO of TrojAI. “We know how important MongoDB is to helping enterprises adopt and harness AI. Our collaboration enables enterprises to add a layer of security to their database initialization and RAG workflows to help protect against the evolving GenAI threat landscape.”

But wait, there’s more! In February, we’ve got two webinars coming up with MAAP partners that you don’t want to miss:

- Build a JavaScript AI Agent With MongoDB and LangGraph.js: Join MongoDB Staff Developer Advocate Jesse Hall and LangChain Founding Software Engineer Jacob Lee for an exclusive webinar that highlights the integration of LangGraph.js, LangChain’s cutting-edge JavaScript library, and MongoDB, live on Feb 25.
- Architecting the Future: RAG and AI Agents for Enterprise Transformation: Join MongoDB, LlamaIndex, and Together AI to explore how to strategically build a tech stack that supports the development of enterprise-grade RAG and AI agentic systems, explore technical foundations and practical applications, and learn how the MongoDB AI Applications Program (MAAP) will enable you to rapidly innovate with AI, available on demand.

To learn more about building AI-powered apps with MongoDB, check out our AI Learning Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

February 11, 2025
Artificial Intelligence

MongoDB Empowers ISVs to Drive SaaS Innovation in India

Independent Software Vendors (ISVs) play a pivotal role in the Indian economy. Indeed, the Indian software market is expected to experience an annual growth rate of 10.40%, resulting in a market volume of $15.89bn by 2029.¹ By developing specialized software solutions and digital products that can be bought 'off the shelf', ISVs empower Indian organizations to innovate, improve efficiency, and remain competitive.

Many established enterprises in India choose a 'buy' rather than 'build' strategy when it comes to creating modern software applications. This is particularly true when it comes to cutting-edge AI use cases. MongoDB works closely with Indian ISVs across industries, providing them with a multi-cloud data platform and highly flexible, scalable technologies to build operational and efficient software solutions.

For example, Intellect AI, a business unit of Intellect Design Arena, has used MongoDB Atlas to drive a number of innovative use cases in the banking, financial services, and insurance industries. Intellect AI chose MongoDB for its flexibility coupled with its ability to meet complex enterprise requirements such as scale, resilience, and security compliance. And Ambee, a climate tech startup, is using MongoDB Atlas's flexible document model to support its AI and ML models. Here are three more examples of ISV customers who are enabling, powering, and growing their SaaS solutions with MongoDB Atlas.

MongoDB enhancing Contentstack's content delivery capabilities

Contentstack is a leading provider of composable digital experience solutions, and specializes in headless content management systems (CMS). Headless CMS is a backend-only web content management system that acts primarily as a content repository. “Our headless CMS allows our customers to bring all forms of content to the table, and we host the content for them,” said Suryanarayanan Ramamurthy, Head of Data Science at Contentstack, while speaking at MongoDB.local 2024.

A great challenge in the CMS industry is providing customers with content that remains factually correct, brand-aligned, and tailored to the customer’s identity. Contentstack created an innovative, AI-based product—Brand Kit—that does exactly that, built on MongoDB Atlas. “Our product Brand Kit, which launched in June 2024, overcomes factual incorrectness. The AI capabilities the platform offers help our customers create customized and context-specific content that meets their brand guidelines and needs,” said Ramamurthy. MongoDB Atlas Vector Search enables Contentstack to transform content and bring contextual relevance to retrievals. This helps reduce errors caused by large language model hallucinations, allowing the retrieval-augmented generation (RAG) application to deliver better results to users.

AppViewX: unlocking scale for a growing cybersecurity SaaS pioneer

AppViewX delivers a platform for organizations to manage a range of cybersecurity capabilities, such as certificate lifecycle management and public key infrastructure. The company ensures end-to-end security compliance and data integrity for large enterprises across industries like banking, healthcare, and automotive. Speaking at MongoDB.local Bengaluru in 2024, Karthik Kannan, Vice President of Product Management at AppViewX, explained how AppViewX transitioned from an on-premise product to a SaaS platform in 2021. MongoDB Atlas powered this transition.
MongoDB Atlas's unique flexibility, scalability, and multi-cloud capabilities enabled AppViewX to easily manage fast-growing data sets, authentication, and encryption from its customers’ endpoints, device identities, workload identities, user identities, and more. Furthermore, MongoDB provides AppViewX with robust security, guaranteeing critical data protection and compliance.

“We've been really able to grow fast and at scale across different regions, gaining market share,” said Kannan. “Our engineering team loves MongoDB,” added Kannan. “The support that we get from MongoDB allowed us to get into different regions, penetrate new markets to grow at scale, so this is a really important partnership that helped us get to where we are.”

Zluri Streamlines SaaS Management with MongoDB

Zluri provides a unified SaaS management platform that helps IT and security teams manage applications across the organization. The platform provides detailed insights into application usage, license optimization, security risks, and cost savings opportunities. Zluri processes massive volumes of unstructured data—around 9 petabytes per month—from over 800 native integrations with platforms like single sign-on, human resources management systems, and Google Workspace. One of its challenges was automating discovery and data analysis across those platforms, as opposed to employing an exhaustive, time- and labor-intensive manual approach.

MongoDB Atlas has allowed Zluri to ingest, normalize, process, and manage the high volume and complexity of data seamlessly across diverse sources. “We wanted to connect with every single system that's currently available, get all that data, process all that data so that the system works on autopilot mode, so that you're not manually adding all that information,” said Chaithaniya Yambari, Zluri’s Co-Founder and Chief Technology Officer, when speaking at MongoDB.local Bengaluru in 2024.

As a fully managed database, MongoDB Atlas allows Zluri to eliminate maintenance overhead, so its team of engineers and developers can focus on innovation. Zluri also utilizes MongoDB Atlas Search to perform real-time queries, filtering, and ranking of metadata. This eliminates the challenges of synchronizing separate search solutions with the database, ensuring IT managers get fast, accurate, and up-to-date results.

These are just a few examples of how MongoDB is working with ISVs to shape the future of India’s digital economy. As technology continues to evolve, the role of ISVs in fostering innovation and economic growth will become ever more integral. MongoDB is committed to providing ISVs with a robust, flexible, and scalable database that removes barriers to growth and the ability to innovate.

Visit our product page to learn more about MongoDB Atlas. Learn more about MongoDB Atlas Search on our product details page. Check out our Quick Start Guide to get started with MongoDB Atlas Vector Search today.
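As a concrete flavor of the kind of Atlas Search queries described in the Zluri story above, here is a minimal PyMongo sketch. The connection string, database, collection, field names, and the "default" index name are all hypothetical, and it assumes an Atlas Search index already exists on the collection.

```python
from pymongo import MongoClient

client = MongoClient("<ATLAS_CONNECTION_STRING>")  # placeholder
apps = client["saas_mgmt"]["applications"]         # hypothetical database/collection

# Full-text search with relevance ranking, served by the same cluster that
# stores the data, so there is no separate search engine to keep in sync.
pipeline = [
    {"$search": {
        "index": "default",  # assumed Atlas Search index name
        "text": {"query": "single sign-on", "path": ["name", "category"]},
    }},
    {"$limit": 10},
    {"$project": {"name": 1, "category": 1, "score": {"$meta": "searchScore"}}},
]

for doc in apps.aggregate(pipeline):
    print(doc["name"], doc["score"])
```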

February 11, 2025
Applied

Simplify Security At Scale with Resource Policies in MongoDB Atlas

Innovation is the gift that keeps on giving: industries that are more innovative have higher returns, and more innovative industries see higher rates of long-term growth.¹ No wonder organizations everywhere strive to innovate. But in the pursuit of innovation, organizations can struggle to balance the need for speed and agility with critical security and compliance requirements. Specifically, software developers need the freedom to rapidly provision resources and build applications. But manual approval processes, inconsistent configurations, and security errors can slow progress and create unnecessary risks.

“Friction that slows down employees and leads to insecure behavior is a significant driver of insider risk.”
— Paul Furtado, Vice President, Analyst, Gartner

Enter resource policies, which are now available in public preview in MongoDB Atlas. This new feature balances rapid innovation with robust security and compliance. Resource policies allow organizations to enable developers with self-service access to Atlas resources while maintaining security through automated, organization-wide ‘guardrails’.

What are resource policies?

Resource policies help organizations enforce security and compliance standards across their entire Atlas environment. These policies act as guardrails by creating organization-wide rules that control how Atlas can be configured. Instead of targeting specific user groups, resource policies apply to all users in an organization, and focus on governing a particular resource.

Consider this example: An organization subject to General Data Protection Regulation (GDPR)² requirements needs to ensure that all of their Atlas clusters run only on approved cloud providers in regions that comply with data residency and privacy regulations. Without resource policies, developers may inadvertently deploy clusters on any cloud provider. This risks non-compliance and potential fines of up to 20 million euros or 4% of global annual turnover, according to Article 83 of the GDPR. But by using resource policies, the organization can mandate which cloud providers are permitted, ensuring that data resides only in approved environments. The policy is automatically applied to every project in the organization, preventing the creation of clusters on unauthorized cloud platforms. Thus compliance with GDPR is maintained.

The following resource policies are now in public preview:

- Restrict cloud provider: Limit Atlas clusters to approved cloud providers (AWS, Azure, Google Cloud).
- Restrict cloud region: Restrict cluster deployments in approved cloud providers to specific regions.
- Block wildcard IP: Reduce security risk by disabling the use of the 0.0.0.0/0 (or “wildcard”) IP address for cluster access.

How resource policies enable secure self-service Atlas access

Resource policies address the challenges organizations face when trying to balance developer agility with robust security and compliance. Without standardized controls, there is a risk that developers will configure Atlas clusters in ways that deviate from corporate or external requirements. This invites security vulnerabilities and compliance gaps. Manual approval and provisioning processes for every new project create delays. Concurrently, platform teams struggle to enforce consistent standards across an organization, increasing operational complexity and costs. With resource policies, security and compliance standards are automatically enforced across all Atlas projects. This eliminates manual approvals and reduces the risk of misconfigurations.
Organizations can deliver self-service access to Atlas resources for their developers. This allows them to focus on building applications instead of navigating complex internal review and compliance processes. Meanwhile, platform teams can manage policies centrally. This ensures consistent configurations across the organization and frees time for strategic initiatives.

The result is a robust security posture, accelerated innovation, and greater efficiency. Automated guardrails prevent unauthorized configurations. Concurrently, centralized policy management streamlines operations and ensures alignment with corporate and external standards. Resource policies enable organizations to scale securely and innovate without compromise. This empowers developers to move quickly while simplifying governance.

Creating resource policies

Atlas resource policies are defined using the open-source Cedar policy language, which combines expressiveness with simplicity. Cedar’s concise syntax makes writing and understanding policies easy, streamlining policy creation and management. Resource policies can be created and managed programmatically through infrastructure-as-code tools like Terraform or CloudFormation, or by integrating directly using the Atlas Admin API.

To explore what constructing a resource policy looks like in practice, let’s return to our earlier example: an organization subject to GDPR requirements that wants to ensure all of its Atlas clusters run on approved cloud providers only. To prevent users from creating clusters on Google Cloud (GCP), the organization could write the following policy, named “Policy Preventing GCP Clusters.” This policy forbids creating or editing a cluster when the cloud provider is Google Cloud. The body defines the behavior of the policy in the human- and machine-readable Cedar language. If required, ‘gcp’ could be replaced with ‘aws’.

Figure 1. Example resource policy preventing the creation of Atlas clusters on GCP.

Alternatively, the policy could allow users to create clusters only on Google Cloud with the following policy, named “Policy Allowing Only GCP Clusters.” This policy uses the Cedar clause “unless” to restrict creating or editing a cluster unless it is on GCP.

Figure 2. Example resource policy that restricts cluster creation to GCP only.

Policies can also have compound elements. For example, an organization can create a project-specific policy that only enforces the creation of clusters in GCP for the project with ID 6217f7fff7957854e2d09179.

Figure 3. Example resource policy that restricts cluster creation to GCP only for a specific project.

And, as shown in Figure 4, another policy might restrict cluster deployments on GCP as well as on two unapproved AWS regions: US-EAST-1 and US-WEST-1.

Figure 4. Example resource policy restricting cluster deployments on GCP as well as AWS regions US-EAST-1 and US-WEST-1.

Getting started with resource policies

Resource policies are available now in MongoDB Atlas in public preview. Get started creating and managing resource policies programmatically using infrastructure-as-code tools like Terraform or CloudFormation. Alternatively, integrate directly with the Atlas Admin API; a hedged sketch of an API call follows. Support for managing resource policies in the Atlas user interface is expected by mid-2025. Use the resources below to learn more about resource policies.
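For illustration, here is a rough Python sketch of creating a policy like the one in Figure 1 via the Atlas Admin API. The endpoint path, the resource-version date in the Accept header, the request body shape, and the Cedar entity and action names are all assumptions modeled on the documentation linked below; verify them against the current API specification before relying on this.

```python
# A hedged sketch: create an organization-level resource policy that forbids
# creating or editing clusters on GCP. Names and paths are assumptions.
import requests
from requests.auth import HTTPDigestAuth

ORG_ID = "<ORG_ID>"  # placeholder

# Cedar policy body; entity/action names below are illustrative, not verified.
CEDAR_BODY = """
forbid (
    principal,
    action == ResourcePolicy::Action::"cluster.modify",
    resource
) when {
    context.cluster.cloudProviders.containsAny([ResourcePolicy::CloudProvider::"gcp"])
};
"""

resp = requests.post(
    # Assumed endpoint for org-wide resource policies:
    f"https://cloud.mongodb.com/api/atlas/v2/orgs/{ORG_ID}/resourcePolicies",
    auth=HTTPDigestAuth("<PUBLIC_KEY>", "<PRIVATE_KEY>"),
    headers={"Accept": "application/vnd.atlas.2024-08-05+json"},  # assumed version date
    json={"name": "Policy Preventing GCP Clusters",
          "policies": [{"body": CEDAR_BODY}]},
)
resp.raise_for_status()
print(resp.json())
```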
- Feature documentation
- Postman Collection
- Atlas Administration API documentation
- Terraform Provider documentation
- AWS CDK
- AWS CloudFormation documentation

¹ McKinsey & Company, August 2024
² gdpr.eu

February 10, 2025
Updates

Dynamic Workloads, Predictable Costs: The MongoDB Atlas Flex Tier

MongoDB is excited to announce the launch of the Atlas Flex tier. This new offering is designed to help developers and teams navigate the complexities of variable workloads while growing their apps. Modern development environments demand database solutions that can dynamically scale without surprise costs, and the Atlas Flex tier is an ideal option offering elasticity and predictable pricing.

Previously, developers could either pick the predictable pricing of a shared tier cluster or the elasticity of a serverless instance. The Atlas Flex tier combines the best features of the Shared and Serverless tiers and replaces them, providing an easier choice for developers. This enables teams to focus on innovation rather than database management.

This new tier underscores MongoDB’s commitment to empowering developers through an intuitive and customer-friendly platform. It simplifies cluster provisioning on MongoDB Atlas, providing a unified, simple path from idea to production. With the ever-increasing complexity of application development, it’s imperative that a database evolve alongside the project it supports. Whether prototyping a new app or managing dynamic production environments, MongoDB Atlas provides comprehensive support. And, by seamlessly combining scalability and affordability, the Atlas Flex tier reduces friction as requirements expand.

Bridging the gap between flexibility and predictability: What the Atlas Flex tier offers developers

Database solutions that can adapt to fluctuating workloads without incurring unexpected costs are becoming a must-have for every organization. While traditional serverless models offer flexibility, they can result in unpredictable expenses due to unoptimized queries or unanticipated traffic surges. The Atlas Flex tier bridges this gap and empowers developers with:

- Flexibility: 100 ops/sec and 5 GB of storage are included by default, as is dynamic scaling of up to 500 ops/sec.
- Predictable pricing: Customers will be billed an $8 base fee and additional fees based on usage, with pricing capped at $30 per month. This prevents runaway costs—a persistent challenge with serverless architectures.
- Data services: Customers can access various features such as MongoDB Atlas Search, MongoDB Atlas Vector Search, Change Streams, MongoDB Atlas Triggers, and more. This delivers a comprehensive solution for development and test environments.
- Seamless migration: Atlas Flex tier customers can transition to dedicated clusters when needed via the MongoDB Atlas UI or using the Admin API.

The Atlas Flex tier marks a significant step forward in streamlining database management and enhancing its adaptability to the needs of modern software development. The Atlas Flex tier provides unmatched flexibility and reliability for managing high-variance traffic and testing new features.

Building a unified on-ramp: From exploration to production

MongoDB Atlas enables a seamless progression for developers at every stage of application development. With three distinct tiers—Free, Flex, and Dedicated—MongoDB Atlas encourages developers to explore, build, and scale their applications:

- Atlas Free tier: Perfect for experimenting with MongoDB and building small applications at no initial cost, this tier remains free forever.
- Atlas Flex tier: Bridging the gap between exploration and production, this tier offers scalable, cost-predictable solutions for growing workloads.
- Atlas Dedicated tier: Designed for high-performance, production-ready applications with built-in automated performance optimization, this tier lets you scale applications confidently with MongoDB Atlas’s robust observability, security, and management capabilities.

Figure 1. An overview of the Free, Flex, and Dedicated tiers.

This tiered approach gives developers a unified platform for their entire journey. It ensures smooth transitions as projects evolve from prototypes to enterprise-grade applications. At MongoDB, our focus has always been on removing obstacles for innovators, and this simple scaling path empowers developers to focus on innovation rather than navigating infrastructure challenges.

Supporting startups with unpredictable traffic

When startups launch applications with uncertain user adoption rates, they often face scalability and cost challenges. But the Atlas Flex tier addresses these issues! For example, startups can begin building apps with minimal upfront costs. The Atlas Flex tier enables them to scale effortlessly to accommodate traffic spikes, with support for up to 500 operations per second whenever required. And as user activity stabilizes and grows, migrating to dedicated clusters is a breeze. MongoDB Atlas removes the stress of managing infrastructure. It enables startups to focus on building exceptional user experiences and achieving product-market fit.

Accelerating MVPs for gen AI applications

The Atlas Flex tier is particularly suitable for minimum viable products in generative AI applications. Indeed, those incorporating vector search capabilities are perfect use cases. For example, imagine a small research team specializing in AI. It has developed a prototype that employs MongoDB Atlas Vector Search for the management of embeddings in the domain of natural language processing. The initial workloads remain under 100 ops/sec, so the monthly cost stays at the $8 base fee. As the model is subjected to comprehensive testing and query demand increases, the application can be scaled seamlessly without interrupting performance. Given the top-end cap of $30 per month, developers can refine the application without concerns about infrastructure scalability or unforeseen expenses.

The table below shows how monthly Atlas Flex tier pricing breaks down by capacity.

Table: Understanding the costs—the Atlas Flex tier’s pricing breakdown.

The monthly fee for each level of usage is prorated and billed on an hourly basis. All clusters on MongoDB Atlas, including Atlas Flex tier clusters, are pay-as-you-go; clusters are only charged for as long as they remain active. For example, a workload that requires 100 ops/sec for 20 days, 250 ops/sec for 5 days, and 500 ops/sec for 5 days would cost approximately $13.67. If the cluster was deleted after the first 20 days of usage, the cost would be approximately $5.28. This straightforward and transparent pricing model ensures developers can plan budgets with confidence while accessing world-class database capabilities.
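To make the proration concrete, here is a small Python sketch of that billing arithmetic. The per-tier hourly rates are assumptions reverse-engineered from the $8 and $30 monthly figures in this post (roughly $0.011/hour at the base tier); consult the official pricing table for the actual rates.

```python
# Rough sketch of prorated, hourly pay-as-you-go billing for the Flex tier.
# ops/sec tier -> assumed $/hour (hypothetical middle-tier rate included):
HOURLY_RATE = {100: 0.011, 250: 0.029, 500: 0.041}

def flex_cost(usage):
    """usage: list of (ops_per_sec_tier, days) tuples; returns total dollars."""
    return sum(HOURLY_RATE[tier] * days * 24 for tier, days in usage)

# The post's example: 100 ops/sec for 20 days, 250 for 5 days, 500 for 5 days.
print(round(flex_cost([(100, 20), (250, 5), (500, 5)]), 2))  # ~13.68, near the quoted $13.67
# Deleting the cluster after the first 20 days:
print(round(flex_cost([(100, 20)]), 2))  # 5.28, matching the quoted ~$5.28
```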
Get started today

The Atlas Flex tier revolutionizes database management. It caters to projects at all stages—from prototypes to production. Additionally, it delivers cost stability, enhanced scalability, and access to MongoDB’s robust developer tools in a single seamless solution. With the Atlas Flex tier, developers gain the freedom to innovate without constraints, confident that their database can handle any demand their applications generate. Whether testing groundbreaking ideas or scaling for a product launch, this tier provides comprehensive support. Learn more or get started with the Atlas Flex tier today to elevate application development to the next level.

February 6, 2025
Updates

Automate Network Management Using Gen AI Ops with MongoDB

Imagine that it’s a typical Tuesday afternoon and that you’re the operations manager for a major North American telecommunications company. Suddenly, your Network Operations Center (NOC) receives an alert that web traffic in Toronto has surged by hundreds of percentage points over the last hour—far above its usual baseline. At nearly the same moment, a major Toronto-based client complains that their video streams have been buffering nonstop.

Just a few years ago, a scenario like this would trigger a frantic scramble: teams digging into logs, manually writing queries, and attempting to correlate thousands of lines of data in different formats to find a single root cause. Today, there’s a more streamlined, AI-driven approach. By combining MongoDB’s developer data platform with large language models (LLMs) and a retrieval-augmented generation (RAG) architecture, you can move from reactive “firefighting” to proactive, data-informed diagnostics. Instead of juggling multiple monitoring dashboards or writing complicated queries by hand, you can simply ask for insights—and the system retrieves and analyzes the necessary data automatically.

Facing the unexpected traffic spike

Now let’s imagine the same situation, but this time with AI-assisted network management. Shortly after you spot a traffic surge in Toronto, your NOC chatbot pings you with a situation report: requests from one neighborhood are skyrocketing, and an unusually high percentage involve video streaming paths or caching servers. Under the hood, MongoDB automatically ingests every log entry and telemetry event in real time—capturing IP addresses, geographic data, request paths, timestamps, router logs, and sensor data. Meanwhile, textual content (such as error messages, user complaints, and chat transcripts) is vectorized and stored in MongoDB for semantic search. This setup enables near-instant access to relevant information whenever a keyword like “buffering,” “video streams,” or “streaming lag” is mentioned, ensuring a fast, end-to-end diagnosis. Refer to this article to learn more about semantic search.

Zeroing in on the root cause

Instead of rummaging through separate logging tools, you pose a simple natural-language question to the system: “What might be causing the client’s video stream buffering problem in Toronto?” The LLM responds by generating a custom MongoDB Aggregation Pipeline—written in Python code—tailored to your query. It might look something like this: a $match stage to filter for the last twenty-four hours of data in Toronto, a $group stage to roll up metrics by streaming services, and a $sort stage to find the largest error counts (a sketch of such a pipeline closes this section). The code is automatically served back to you, and with a quick confirmation, you execute it on your MongoDB cluster. A moment later, the chatbot returns with a summarized explanation that points to an overloaded local CDN node, along with higher-than-expected requests from older routers known to misbehave under peak load.

Next, you ask the system to explain the core issue in simpler terms so you can share it with a business stakeholder. The LLM takes the numeric results from the Aggregation Pipeline, merges them with textual logs that mention “firmware out-of-date,” and then outputs a cohesive explanation. It even suggests that many of these older routers are still running last year’s firmware release—a known contributor to buffering issues on video streams during traffic spikes.
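As a flavor of what such a generated pipeline might look like, here is a minimal PyMongo sketch. The connection string and the collection and field names (network_events, city, service, error_count) are hypothetical stand-ins for a real telemetry schema.

```python
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("<ATLAS_CONNECTION_STRING>")  # placeholder
events = client["telemetry"]["network_events"]     # hypothetical collection

pipeline = [
    # Last twenty-four hours of events in Toronto
    {"$match": {
        "city": "Toronto",
        "timestamp": {"$gte": datetime.utcnow() - timedelta(hours=24)},
    }},
    # Roll up error counts by streaming service
    {"$group": {"_id": "$service", "errors": {"$sum": "$error_count"}}},
    # Largest error counts first
    {"$sort": {"errors": -1}},
    {"$limit": 5},
]

for row in events.aggregate(pipeline):
    print(row["_id"], row["errors"])
```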
How retrieval-augmented generation (RAG) helps

The power behind this effortless insight is a RAG architecture, which marries semantic search with generative text responses. First, the LLM uses vector search in MongoDB to retrieve only those log entries, complaint records, and knowledge base articles that directly relate to streaming. Once it has these key data chunks, the LLM can generate—and continually refine—its analysis.

Figure 1. Network chatbot architecture with MongoDB.

When the system references historical data to confirm that “similar spikes occurred during the playoffs last year” or that “users with older firmware frequently complain about buffering,” it’s not blindly guessing. Instead, it’s accessing domain-specific logs, user feedback, and diagnostic documents stored in MongoDB, and then weaving them together into a coherent explanation. This eliminates guesswork and slashes the time your team would otherwise spend on low-level data cleanup, correlation, and interpretation.

Executing automated remediation

Armed with these insights, your team can roll out a targeted fix, possibly involving an auto-update to the affected routers or load-balancing traffic to alternative CDN endpoints. MongoDB’s Change Streams can monitor for future anomalies. If a traffic spike starts to look suspiciously similar to the scenario you just solved, the system can raise a proactive alert or even initiate the fix automatically. Refer to the official documentation to learn more about change streams.

Meanwhile, the cost savings add up. You no longer need engineers manually piecing data together, nor do you endure prolonged user dissatisfaction while you try to figure out what’s happening. Everything from anomaly detection to root-cause analysis and recommended mitigation steps is fed through a single pipeline—visible and explainable in plain language.

A future of AI-driven operations

This scenario highlights how gen AI ops and MongoDB complement each other to transform network management:

- Schema flexibility: MongoDB’s document-based model effortlessly stores logs, performance metrics, and user feedback in a single, consistent environment.
- Real-time performance: With horizontal scaling, you can ingest the massive volumes of data generated by network logs and user requests at any hour of the day.
- Vector search integration: By embedding textual data (such as logs, user complaints, or FAQs) and storing those vectors in MongoDB, you enable instant retrieval of semantically relevant content—making it easy for an LLM to find exactly what it needs.
- Aggregation + LLM: An LLM can auto-generate MongoDB Aggregation Pipelines to sift through numeric data with ease, while a second pass to the LLM composes a final summary that merges both numeric and textual analysis.

Once you see how much time and effort this end-to-end workflow saves, you can extend it across the entire organization. Whether it’s analyzing sudden traffic spikes in specific geographies, diagnosing a security event, or handling peak online shopping loads during a holiday sale, the concept remains the same: empower people to ask natural-language questions about complex data, rely on AI to craft the specialized queries behind the scenes, and store it all in a platform that can handle unbounded complexity.

Ready to embrace gen AI ops with MongoDB?

Network disruptions will never fully disappear, but how quickly and intelligently you respond can be a game-changer.
By uniting MongoDB with LLM-based AI and a retrieval-augmented generation (RAG) strategy, you transform your network operations from a tangle of logs and dashboards into a conversational, automated, and deeply informed system. Sign up for MongoDB Atlas to start building your own RAG-based workflows. With intelligent vector search, automated pipeline generation, and natural-language insight, you’ll be ready to tackle everything from video-stream buffering complaints to the next unexpected traffic surge—before users realize there’s a problem.

If you would like to learn more about how to build gen AI applications with MongoDB, visit the following resources:

- Learn more about MongoDB capabilities for artificial intelligence on our product page.
- Get started with MongoDB Vector Search by visiting our product page.
- Blog: Leveraging an Operational Data Layer for Telco Success
- Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, “Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads?” presented by MongoDB Field CTO Rick Houlihan.

February 5, 2025
Artificial Intelligence

Official Django MongoDB Backend Now Available in Public Preview

We are pleased to announce that the Official Django MongoDB Backend Public Preview is now available. This Python package makes it easier than ever to combine the sensible defaults and fast development speed Django provides with the convenience and ease of MongoDB.

Building for the Python community

For years, Django has been consistently rated one of the most popular web frameworks in the Python ecosystem. It’s a powerful tool for building web applications quickly and securely, and implements best practices by default while abstracting away complexity. Over the last few years, Django developers have increasingly used MongoDB, presenting an opportunity for an official MongoDB-built Python package to make integrating both technologies as painless as possible.

We recognize that success in this endeavor requires more than just technical expertise in database systems—it demands a deep understanding of Django's ecosystem, conventions, and the needs of its developer community. So we’re committed to ensuring that the Official Django MongoDB Backend not only meets the technical requirements of developers, but also feels painless and intuitive, and is a natural complement to the base Django framework.

What’s in the Official Django MongoDB Backend

In this public preview release, the Official Django MongoDB Backend offers developers the following capabilities:

- The ability to use Django models with confidence. Developers can use Django models to represent MongoDB documents, with support for Django forms, validations, and authentication.
- Django admin support. The package allows users to fire up the Django admin page as they normally would, with full support for migrations and database schema history.
- Native connecting from settings.py. Just as with any other database provider, developers can customize the database engine in settings.py to get MongoDB up and running.
- MongoDB-specific querying optimizations. Field lookups have been replaced with aggregation calls (aggregation stages and aggregate operators), JOIN operations are represented through $lookup, and it’s possible to build indexes right from Python.
- Limited advanced functionality. While still in development, the package already has support for time series, projections, and XOR operations.
- Aggregation pipeline support. Raw querying allows aggregation pipeline operators. Since aggregation is a superset of what traditional MongoDB Query API methods provide, it gives developers more functionality.

And this is just the start—more functionality (including BSON data type support and embedded document support in arrays) is on its way. Stay tuned for the General Availability release later in 2025!

Benefits of using the Official Django MongoDB Backend

While during the public preview MongoDB requires more work to set up in the initial stages of development than Django’s defaults, the payoff that comes from the flexibility of the document model and the full feature set of Atlas makes that tradeoff worth it over the whole lifecycle of a project. With the Official Django MongoDB Backend, developers can architect applications in a distinct and novel way, denormalizing their data and creating Django models so that data that is accessed together is stored together, as sketched below. These models are both easier to maintain and more performant to retrieve for a number of use cases—which, when paired with the robust, native Django experience MongoDB is creating, is a compelling offering, improving the developer experience and accelerating software development.
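For instance, here is a hedged sketch of what such a denormalized model might look like, using the preview package's embedded-document support. The class and field names are illustrative, and the exact import paths should be checked against the package documentation.

```python
# Illustrative only: a customer document that embeds its address, so the data
# accessed together is stored together and fetched in a single read.
from django.db import models
from django_mongodb_backend.fields import EmbeddedModelField
from django_mongodb_backend.models import EmbeddedModel

class Address(EmbeddedModel):
    city = models.CharField(max_length=100)
    country = models.CharField(max_length=100)

class Customer(models.Model):
    name = models.CharField(max_length=100)
    # Embedded rather than a foreign key to a separate table: no JOIN (or
    # $lookup) is needed to load a customer together with their address.
    address = EmbeddedModelField(Address)
```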
At its core, the MongoDB document model aligns well with Django's mission to “encourage rapid development and clean, pragmatic design.” The MongoDB document model naturally mirrors how developers think about and structure their data in code, allowing for a seamless context switch between a Django model and a MongoDB document. For many modern applications—especially those dealing with hierarchical, semi-structured, or rapidly evolving data structures—the document model provides a more natural and flexible solution than traditional relational databases.

Dovetailing with this advantage is the fact that it’s simpler than ever to develop locally with MongoDB, thanks to how painless it is to create a local Atlas deployment with Docker. With sensible preconfigured defaults, it’s possible to create a single-node replica set simply by pulling the Docker image and running it, using only an Atlas connection string, and no extra steps needed. The best part? It’s even possible to convert an existing Atlas implementation running in Docker Compose to a local image. Developing with Django and MongoDB just works with the Atlas CLI and Docker.

How to get started with the Official Django MongoDB Backend

To get started, it’s as easy as running pip install django-mongodb-backend. MongoDB has even created an easy-to-use starter template that works with the django-admin command startproject, making it a snap to see what typical MongoDB migrations look like in Django. For more information, check out our quickstart guide.

Interested in giving the package a try for yourself? Please try our quickstart guide and consult our comprehensive documentation. To see the raw code behind the package and follow along with development, check out the repository. For an in-depth look into some of the thinking behind major package architecture decisions, please read this blog post by Jib Adegunloye.

Questions? Feedback? Please post on our community forums or through UserVoice. We value your input as we continue to work to build a compelling offering for the Django community.
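To round out the setup steps above, here is a minimal sketch of pointing settings.py at MongoDB. The connection string and database name are placeholders, and the parse_uri helper and auto-field path follow the package's quickstart as we understand it; verify them against the documentation for your version.

```python
# settings.py (excerpt) -- a minimal sketch, assuming the preview package's API.
import django_mongodb_backend

MONGODB_URI = "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net"  # placeholder

# parse_uri builds the DATABASES entry for the backend from a connection string.
DATABASES = {
    "default": django_mongodb_backend.parse_uri(MONGODB_URI, db_name="my_app_db"),
}

# MongoDB documents use ObjectId primary keys rather than integer autoincrement.
DEFAULT_AUTO_FIELD = "django_mongodb_backend.fields.ObjectIdAutoField"
```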

February 3, 2025
Updates

Improving MongoDB Queries by Simplifying Boolean Expressions

Key takeaways

- Document databases encourage storing data in fewer collections with many fields, in contrast to relational databases’ normalization principle. This approach improves efficiency but requires careful handling of complex filters.
- Simplifying Boolean expressions can improve query performance by reducing computational overhead and enabling better plan generation.
- The solution uses a modified Quine–McCluskey algorithm and Petrick’s method on an efficient bitset representation of Boolean expressions.
- MongoDB's culture supports innovation by empowering engineers to tackle problems and solve them from beginning to end.

Introduction

Document databases like MongoDB encourage a strategy that involves storing data in fewer collections with a large number of fields. This is in contrast to the normalization principle of relational databases, which recommends spreading data across numerous tables with a smaller number of fields. Denormalization and avoiding complex joins are a source of efficiency for document databases. However, as filters become complex, you must take care to handle them properly. Poorly handled complex filters can negatively affect database performance, resulting in slower query responses and higher resource usage. Addressing complex filters becomes a critical task.

One optimization technique to mitigate the performance issues with complex filters is Boolean expression simplification. This involves reducing complex Boolean expressions into simpler forms, which the query engine can evaluate more efficiently. As a result, the database can execute queries faster and with less computational overhead.

To demonstrate the importance of filter simplification, consider a real MongoDB customer case. The query in the case was enormous in size and clearly machine-generated, and the optimizer couldn't handle it efficiently. It was clear that simplifying the query would help the optimizer find a better plan. Here's a smaller example, inspired by that case, of how simplifying filters can lead to more efficient query plans:

```javascript
db.collection.createIndex({b: 1})
filter = {$or: [{$and: [{a: 1}, {a: {$ne: 1}}]}, {b: 2}]}
db.collection.find(filter)
```

The query involves predicates on the field “a”, so the optimizer cannot generate an Index Scan plan for the unoptimized version of the query and opts for a collection scan plan. However, the simplified expression {b: 2} makes the Index Scan plan possible. The difference between the plans can be drastic for large collections and selective indexes. In our benchmark, the simplifier showcased a remarkable 18,100% throughput improvement in this scenario. The benefits of filter simplification can come in different flavors: in one of our benchmarks testing large queries, execution time was cut in half in a collection scan plan. This improvement is due to the faster execution of the simplified queries.

Simplifying Boolean expressions: Modified Quine–McCluskey algorithm and Petrick’s method

The journey to find a good solution for Boolean simplification was surprisingly complex. The final solution can be expressed in just four steps:

1. Convert the expression being simplified into bitset DNF form.
2. Apply the QMC reduction operation to DNF terms: (x ∧ y) ∨ (x ∧ ¬y) = x
3. Apply the Absorption law: x ∨ (x ∧ y) = x
4. Use Petrick’s method for further simplification.

A toy illustration of steps 2 and 3 follows below. Yet we faced a few challenges along the way. This section will explore the challenges and how we addressed them.
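To ground steps 2 and 3, here is a toy Python sketch of the reduction and absorption passes on a bitset-style DNF representation. The (mask, values) encoding and helper names are illustrative only; the server-side implementation applies the same idea in C++ on custom bitsets.

```python
# Each DNF term is (mask, values): mask has 1-bits for the predicates present
# in the term; values holds their polarity (within mask).

def qmc_reduce_once(terms):
    """One pass of the rule (x ^ y) v (x ^ ~y) = x over a set of DNF terms."""
    terms = list(terms)
    reduced, used = set(), set()
    for i, (m1, v1) in enumerate(terms):
        for m2, v2 in terms[i + 1:]:
            diff = v1 ^ v2
            # Same predicate set, differing in exactly one polarity bit:
            if m1 == m2 and bin(diff).count("1") == 1:
                reduced.add((m1 & ~diff, v1 & ~diff))  # drop that predicate
                used.update({(m1, v1), (m2, v2)})
    return (set(terms) - used) | reduced

def absorb(terms):
    """Absorption law x v (x ^ y) = x: drop terms subsumed by a smaller term."""
    return {
        (m, v) for (m, v) in terms
        if not any((m2, v2) != (m, v) and m2 & m == m2 and v & m2 == v2
                   for (m2, v2) in terms)
    }

# (a ^ b) v (a ^ ~b), with predicate a = bit 0 and b = bit 1:
terms = {(0b11, 0b11), (0b11, 0b01)}
print(absorb(qmc_reduce_once(terms)))  # {(0b01, 0b01)}, i.e. just "a"
```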
Simplifying AST

The initial solution—applying Boolean laws directly to filters in the Abstract Syntax Tree (AST) format that MongoDB's query engine uses—proved to be resource-intensive, consuming significant memory and CPU time. This outlines the first challenge in the journey: finding an efficient method to simplify Boolean expressions. One issue with the initial solution was that transformations could be applied repeatedly, leading to the same expression appearing multiple times. To address this, we needed to store previously observed expressions in memory and check each time whether an expression had already been seen.

Modified Quine–McCluskey

The Quine–McCluskey algorithm, commonly used in digital circuit design, offers a different approach with a finite number of steps. The explanation of the algorithm might seem daunting, but in essence we just apply the following reduction rule to a pair of expressions:

(x ∧ y) ∨ (x ∧ ¬y) = x

This allows the pair of expressions to be reduced into one. A challenge with the Quine–McCluskey algorithm is that it requires a list of prime implicants as input. An implicant of a Boolean expression is a combination of predicates that makes the expression true—essentially a row of the truth table where the Boolean expression is true. To obtain a list of prime implicants from a given Boolean expression, we need to calculate a truth table. This involves evaluating the expression 2^n times, producing up to 2^n implicants. For an expression with 10 predicates, we need to evaluate the expression 1,024 times. With 20 predicates, the number of evaluations is 1,048,576. This is impractical.

Implicants are similar to Disjunctive Normal Form (DNF) minterms, as both evaluate the expression to be true. We could use DNF instead of implicants, but here is another challenge. Even though the expressions below are equivalent:

(a ∧ b) ∨ (a ∧ ¬b)
a ∨ (a ∧ ¬b)

only the first one is represented via implicants: (a ∧ b) is an implicant of the expression and can be simplified by QMC, while (a) is not, as it says nothing about the value of predicate (b).

Absorption law

Tackling the previous challenge, we face another one: we now need a way to compensate for the reduced power of QMC. Fortunately, this can be resolved using the Absorption law:

x ∨ (x ∧ y) = x

With its help, we can simplify the expression from the previous example, a ∨ (a ∧ ¬b), to just a.

Effective bitset representation

One optimization that proved effective was to use bitwise instructions on a bitset representation of Boolean expressions instead of working with the AST representation. This approach boosts performance by taking advantage of the speed and simplicity of bitwise operations, which are generally quicker and more straightforward than working with the more complex AST structure.

Petrick’s method

Can we do better and ensure the expression stays accurate while further reducing redundancy? For example, take the expression:

(¬a ∧ ¬b ∧ ¬c) ∨ (¬a ∧ b ∧ ¬c) ∨ (a ∧ ¬b ∧ ¬c) ∨ (¬a ∧ b ∧ c)

After some QMC reduction steps are applied, we end up with the following expression:

(¬a ∧ ¬c) ∨ (¬b ∧ ¬c) ∨ (¬a ∧ b)

However, this can be reduced further, down to:

(¬b ∧ ¬c) ∨ (¬a ∧ b)

This redundancy comes from the fact that when we reduce two terms into one, the new term covers (or, we can say, represents) the two original ones, and some of the derived terms can be removed without breaking the “coverage” of the original expression. To find the minimal “coverage” we use Petrick’s method.
The idea of this method is to express the coverage as a Boolean expression and then find the minimal combination of its predicates that evaluates the expression to true. This can be done by transforming the expression into DNF and picking the smallest minterm.

Boolean expressions in MongoDB

In the previous section, we developed the algorithm for simple Boolean expressions. However, we can't use it to simplify filters in MongoDB just yet. The MongoDB Query Language (MQL) has unique logical semantics, particularly regarding negation, handling missing values, and array values.

Logical operators

MQL supports the following logical operators:

- $and - regular conjunction operator.
- $or - regular disjunction operator.
- $not - negation operator with special semantics; see below for details.
- $nor - negation of disjunction, which is equivalent to a conjunction of negations (DeMorgan laws).

The only operator with specific semantics is $nor, which can be easily converted to a conjunction of negations.

MQL negation semantics

MongoDB's flexible schema can lead to some fields being missing in documents. For instance, in a collection with documents {a: 1, b: 1} and {a: 2}, the field b is missing in the second document. When dealing with negations in MQL, missing values play an important role: negations will match documents where the specified field is missing. Let's look at a collection of four documents and see how they respond to different queries (see Table 1):

Table 1. Negating comparison expressions in MQL.

As shown, the $not operator matches documents where the field "a" is missing. The $ne operator in MQL is only syntactic sugar for {$not: {$eq: ...}}, which means it matches documents where the specified field is missing (see Table 2):

Table 2. Negating equality expressions in MQL.

Let’s prove that Boolean algebra laws still hold for MQL negations. To do that, we need to introduce new notations:

- not* - MQL negation
- exists - "exists A" means that the field of predicate A does exist
- not-exists - short for "not (exists A)"; means that the field of predicate A does not exist

Being short for not (exists), not-exists behaves in the same way as a normal not:

not-exists (A and B) = not-exists A or not-exists B
not-exists (A or B) = not-exists A and not-exists B

The following equation is always true by the definition of MQL negation:

not* A = not A or not (exists A)

We also state that the predicate A cannot be true when its field is missing:

A and not-exists A = false

This is true even for the operator {$exists: 0} in MQL: e.g., {$and: [{a: {$not: {$exists: 0}}}, {a: {$exists: 0}}]} always returns an empty set of documents.
Internally, {$exists: false} is implemented through negation of {$exists: 1}, hence the following statements are also true in MQL:

not* (not-exists A) = exists A
not* (exists A) = not-exists A

Complement laws

We need to prove that:

A and not* A = false
A or not* A = true

The proof of the first Complement law:

A and not* A
= A and (not A or not-exists A)
= (A and not A) or (A and not-exists A)
= false or false
= false

The proof of the second Complement law:

A or not* A
= A or (not A or not-exists A)
= A or not A or not-exists A
= true

DeMorgan laws

Proof of the first DeMorgan law, not* (A and B) = (not* A) or (not* B):

not* (A and B)
= not (A and B) or not-exists (A and B)
= (not A or not B) or (not-exists A or not-exists B)
= (not A or not-exists A) or (not B or not-exists B)
= not* A or not* B

The proof of the second DeMorgan law, not* (A or B) = (not* A) and (not* B), is a bit more complicated:

not* (A or B)
= not (A or B) or not-exists (A or B)
= (not A and not B) or (not-exists A and not-exists B)
= (not A or not-exists A) and (not B or not-exists A) and (not A or not-exists B) and (not B or not-exists B)
= ((not* A) and (not* B)) and ((not A and not B) or (not A and not-exists B) or (not B and not-exists A) or (not-exists A and not-exists B))
= ((not* A) and (not* B)) and ((not* A) and (not* B))
= (not* A) and (not* B)

The last step holds because (not* A) and (not* B) = (not A and not B) or (not A and not-exists B) or (not B and not-exists A) or (not-exists A and not-exists B).

Involution law

We need to prove that not* (not* A) = A:

not* (not* A)
= not* (not A or not-exists A)
= not* (not A) and not* (not-exists A)
= (A or not-exists A) and (exists A)
= (A and exists A) or (not-exists A and exists A)
= A or false
= A

Here we used the DeMorgan law for not* proved in the previous section.

MQL array values semantics

In MQL, array values behave as expected and don’t change the semantics of Boolean operators. However, they alter the semantics of operands within predicates. It’s helpful to think of it as implicitly adding the word “contains” to the operators’ interpretation:

- {a: {$eq: 7}} means “array a contains a value equal to 7”
- {a: {$ne: 7}}, which is always equivalent to {a: {$not: {$eq: 7}}}, means “array a does not contain a value equal to 7”
- {a: {$gt: 7}} means “array a contains a value greater than 7”

Since the Boolean simplifier focuses on the logical structure of expressions, rather than the internal meaning of predicates, it's irrelevant whether the value is an array or a scalar. The one important difference with array values semantics is that they behave differently under interval simplifications. The following interval transformations are always valid for a scalar value, and invalid for arrays:

scalar > 100 && scalar <= 1 => scalar: () // empty interval
scalar < 100 && scalar >= 1 => scalar: [1, 100)

These differences are important for building Index Scan Bounds. However, as previously mentioned, they are not significant for the Boolean simplifier.

Implementation considerations

We faced several challenges during implementation. To achieve maximum performance, we created our own version of bitsets in C++. The existing options, such as boost::dynamic_bitset, were too slow, and std::bitset lacked flexibility. Memory management was crucial, so we worked hard to reduce memory allocations. This included using optimized storage for our bitset and a C++ polymorphic allocator for some algorithms. We released the simplifier cautiously, enabling it only for specific types of expressions.
We released the simplifier cautiously, enabling it only for specific types of expressions, and we plan to expand the range of expressions that can be simplified in the future. We also set a limit on the maximum number of terms in DNF: before each transformation, we estimate the number of resulting terms, and if that number is too high, we cancel the simplification.

Conclusion

In this post, we described an effective algorithm to simplify Boolean expressions. We used bitsets to represent query filters, modified the Quine-McCluskey algorithm, and applied Petrick's method. Finally, we proved that the algorithm works with MQL's Boolean algebra.

This journey began, as does so much at MongoDB, with a real-world customer example. The customer was struggling with a complex query, and we developed the Boolean Expression Simplifier to help them overcome their specific challenge. In doing so, we ended up enhancing MongoDB's performance for any user dealing with intricate filters. As noted before, in one particularly demanding case involving large collections and selective indexes, the simplifier helped achieve an 18,100% throughput improvement.

This project reflects MongoDB's longstanding commitment to turning customer challenges into successes, and our practice of taking what we learn from individual projects and making broad improvements. We're always working to improve MongoDB's performance, and the simplifier does just that: it enables faster and more efficient query execution.

Finally, this project is a good example of how MongoDB's culture of trust and ownership can lead to impact. Our engineers are empowered to innovate from concept to completion, transforming theoretical ideas like Boolean expression minimization into tangible gains that directly improve the customer experience.

Join our MongoDB Community to learn about upcoming events, hear stories from MongoDB users, and connect with community members from around the world.

January 30, 2025
Engineering Blog

2024 William Zola Award for Community Excellence Recipient

We are thrilled to announce the 2024 recipient of MongoDB's prestigious William Zola Award for Community Excellence: Mateus Leonardi!

Each year, the William Zola Award for Community Excellence recognizes an outstanding community contributor who embodies the legacy of William Zola, a lead technical services engineer at MongoDB who passed away in 2014. Zola had an unwavering commitment to user success and believed deeply in understanding and catering to users' emotions while resolving their technical problems. His philosophy is a guiding force for MongoDB's community to this day. This award comes with a cash prize and travel sponsorship to attend a regional MongoDB.local event.

Mateus Leonardi embodies these leadership values within the MongoDB community. In 2023, he led the launch of a new MongoDB User Group (MUG) in Florianópolis/Santa Catarina, Brazil, and continues to lead it. With his co-leaders, he has organized six successful MUG events and has formed partnerships with local organizations and universities. He has also expanded his impact by becoming a MongoDB Community Creator and has actively shared his MongoDB expertise outside of the user group. For example, in 2024 Mateus volunteered as a subject matter expert for MongoDB certifications. This involved being a panelist on several critical exam development milestones, from defining the knowledge and skills needed to earn credentials, to developing exam blueprints, to writing exam questions and verifying the technical accuracy of MongoDB certifications. Based on this record of support for our community, Mateus was also honored as a MongoDB Community Champion in 2024.

Mateus excels as a dynamic leader and community advocate, extending invitations to fellow Brazilian MongoDB community leaders to speak at MUGs and local events. He also organizes and delivers talks at numerous events, including those for local university students. Ultimately, Mateus goes above and beyond in everything he commits to and has demonstrated a genuine commitment to enhancing both the local MongoDB community and the broader developer community in his area.

Here's what Mateus Leonardi had to say about what the MongoDB community means to him:

Q: Could you tell our readers a little about your day-to-day work?

Mateus: As Head of Engineering at HeroSpark, my mission is to empower our team to innovate with quality and consistency. I work to create an environment where efficiency and constant evolution are natural in our day-to-day, always focusing on solutions that benefit both our team and our customers.

Q: How does MongoDB support you as a developer?

Mateus: MongoDB has been instrumental in our journey to enable adaptability without compromising quality. At work, MongoDB gives us that same flexibility in development, allowing us to quickly adapt to market changes while maintaining high performance and controlled costs. This combination gives us the confidence to innovate sustainably.

Q: Why is being a leader in the MongoDB community important to you?

Mateus: My sixteen-plus-year career has been marked by people who have helped me grow, and now it's my turn to give back. I view my role in the community as a way to multiply knowledge and create a positive impact. Technology has transformed my life, and through the MongoDB community, I can help others transform their realities too.

Q: What has the community taught you?

Mateus: The community has taught me that true learning happens when we share experiences. The act of teaching makes us better learners.
Each interaction in the community allows us to reflect on our limitations and grow collectively. I've learned that the most rewarding thing is not the destination, but the journey shared with other developers.

Congratulations to Mateus Leonardi, recipient of the 2024 William Zola Award for Community Excellence! To learn more about the MongoDB Community, please visit the MongoDB Community homepage.

January 29, 2025
News

The Ditto MongoDB Connector: Seamlessly Sync Edge and Cloud Data

Picture a delivery service that uses mobile devices for real-time tracking of packages, routes, and customer interactions. The mobile app on drivers' phones must sync with a cloud database (such as MongoDB Atlas) to update customer addresses, schedules, and tracking info for backend processing. In remote areas with poor internet, instant syncing might not be possible, leading to inconsistencies and delays if field data doesn't sync immediately. Simultaneous updates to both the mobile app and the cloud could also cause data conflicts, resulting in confusion and potential data loss.

And this is just one example among hundreds. In the logistics and supply chain industry, companies depend on a vast array of edge sensors (like temperature-controlled shipping containers or GPS trackers) and need constant, real-time data updates. In the healthcare industry, patient data collected from wearables or remote monitoring devices must be instantly synced with cloud-based databases for analysis. In the finance industry, mobile banking and real-time trading platforms require instant data synchronization with cloud databases for accurate decision-making. In sum, syncing mobile data in real-world environments is a challenge across industries.

The solution: Ditto's bidirectional connector for MongoDB

To address these challenges, we're excited to introduce the Ditto MongoDB Connector. This solution, a collaboration between Ditto and MongoDB, delivers robust, real-time, bidirectional data synchronization between local apps, Ditto Big Peer (a centralized synchronization engine), and MongoDB Atlas (a robust cloud-based non-relational database). By integrating these technologies, applications can operate effectively even when offline and synchronize changes to the cloud once connectivity is restored. This enhances user satisfaction and fosters trust in digital products.

Seamless architecture for real-time synchronization

Once the Ditto SDK is integrated into your application, each device running your app maintains a local database. These devices automatically form a peer-to-peer (P2P) mesh network and synchronize using available transport protocols such as Bluetooth Low Energy (LE), Local Area Network (LAN), or peer-to-peer Wi-Fi. These devices, known as Small Peers, connect to the Ditto Big Peer, which serves as a centralized cloud datastore and synchronization engine.

The Ditto MongoDB Connector works in tandem with Ditto Big Peer, facilitating data synchronization between a MongoDB Atlas cluster and Ditto Big Peer. It handles this synchronization efficiently by using Conflict-free Replicated Data Types (CRDTs) to resolve conflicts and update both systems with accurate data, preserving changes even when documents are updated simultaneously in both MongoDB Atlas and Ditto. Data from devices streams through Ditto Big Peer, via the Ditto MongoDB Connector, before being stored in a MongoDB Atlas cluster. Conversely, any changes in a cluster are shared with devices via Ditto Big Peer.

Figure 1. Distributed data synchronization architecture for MongoDB Atlas and Ditto Big Peer.

Telemetry data handling

Real-time information flow is essential across various edge use cases, whether it involves synchronizing telemetry data from edge devices to the cloud or distributing user-initiated updates across devices.
The Ditto MongoDB Connector ensures continuous telemetry data synchronization with minimal latency, thanks to a strategy of transferring only deltas (small changes) rather than entire datasets. This significantly improves synchronization speed and reduces data transfer, resulting in fast and seamless updates.

Offline-first capabilities: Critical for field operations

When a device loses its connection to the cloud due to poor network conditions or intermittent connectivity, the Ditto MongoDB Connector ensures that changes made locally on the device are queued up and synced back to MongoDB once connectivity is restored. This guarantees that data is never lost and ensures consistency across all devices and cloud systems.

Robust data security and compliance

Security and compliance are critical for modern enterprises, especially those handling sensitive data. By utilizing MongoDB Atlas, audited as SOC 2 Type 2 compliant, the Ditto MongoDB Connector assures data integrity and security across its applications. Whether in healthcare, finance, or other data-sensitive industries, organizations can trust that their data processes meet stringent regulatory standards.

How it works in action

Let's come back to our example of a delivery driver updating a customer's address on their mobile app. This change must be reflected not only on the driver's device but also in the cloud-based MongoDB database. Ditto's bidirectional connector ensures that updates are instantly synced both ways: any data entered or modified in the mobile app is mirrored in MongoDB, and vice versa. This guarantees consistency across all devices and cloud-based systems.

Another example is a salesperson who often works in areas with limited or no network connectivity. With Ditto, mobile apps can continue functioning even offline, storing data locally on the device. Once connectivity is restored, the changes are automatically synced with MongoDB Atlas, ensuring that the cloud database is updated without the risk of data loss or corruption.

A seamless integration for developers and enterprises

Integrating Ditto's bidirectional connector for MongoDB into your system is simple and designed for scalability. Whether you're building a new IoT solution or enhancing an existing one, this connector helps solve common data synchronization challenges with ease. Developers will appreciate the ease of use and flexibility of the connector, which automatically handles data synchronization and conflict resolution. Enterprises can rely on the scalability and robustness of the solution to ensure their edge devices are always connected, even in remote or low-network environments.

Begin your journey

The Ditto MongoDB Connector is now available and ready for integration into your systems! Whether you're tackling edge-to-cloud data challenges in manufacturing, healthcare, or logistics, this new connector simplifies your workflow and ensures data consistency, even in the most demanding environments.

Data integration between edge devices and the cloud doesn't have to be complicated. With Ditto's new bidirectional connector for MongoDB, you can ensure real-time, conflict-free synchronization across all your systems, empowering your business to make better, faster decisions with accurate data, wherever it resides.

Want to learn more? Explore Ditto's comprehensive migration guide to discover how Ditto and MongoDB Atlas enable edge data synchronization. Begin your journey with us today! Read more about Ditto on its MongoDB partner ecosystem page.
Sign up for a proof-of-concept.

January 28, 2025
Applied

Away From the Keyboard: Ariel Hou, Staff Engineer

Welcome to our article series about developers and what they do when they're not building incredible things with code and data. In "Away From the Keyboard," MongoDB developers discuss what they do, how they keep a healthy work-life balance, and their advice for people seeking a more holistic approach to coding.

In this article, Ariel Hou shares her day-to-day responsibilities as a Staff Engineer at MongoDB; how a pots-and-pans symphony helped her set boundaries while working from home; and the two rules she follows to separate work and personal time.

Q: What do you do at MongoDB?

Ariel: I work with the Atlas Growth Engineering teams, where our focus is on designing and implementing data-driven experiments on the Atlas product. We target metrics like customer acquisition, retention, and feature discovery. We not only run experiments ourselves; we also facilitate other teams in running them by creating tools and platforms, like our internal experimentation admin app and our JavaScript experimentation SDK. In this role, I get to work on full-stack, cross-functional projects where sometimes the audience is Atlas users, and sometimes it's internal devs. There's a lot of variety!

Q: What does work-life balance look like for you?

Ariel: Generally, it means I do work during work hours, and then I don't do work when it's not work hours. I try to enforce a physical separation as much as a mental one. Admittedly, the rise of work-from-home that came with the pandemic blurred the boundary some, and it was a bit of a tough adjustment at the beginning of the pandemic when I first started working at home. Since I was already home, there wouldn't be a clear signal that it was time to "go home"...until there was. Back then, people in Manhattan had taken to banging on pots and pans at around 6 p.m. to salute healthcare workers commuting home, and I could hear it from my apartment. That inadvertently acted as my alarm for the end of the work day!

Q: How do you ensure you set boundaries between work and personal life?

Ariel: The aforementioned physical boundary (for the days I go into the office). There are two other important concepts for me. First, I compartmentalize: sometimes, I will realize an important work-related thing in my off-hours, but rather than acting on it, I'll file it away for work time. The second idea, which goes hand-in-hand with the others: don't break the seal! (AKA, don't trivially open your laptop or respond to Slack messages during off-hours.) The minute you do, the boundary is broken for the day, and it becomes a slippery slope into working more.

Q: Has work-life balance always been a priority for you, or did you develop it later in your career?

Ariel: It was definitely not a priority in my first few years out of college. But living to work left me burned out by 25. By the time I got to MongoDB, I understood that that model wasn't sustainable, nor was it worthwhile. Thankfully, we don't have that culture here on the team.

Q: What benefits has this balance given you in your career?

Ariel: If you've been working for many hours straight, there's a certain point in the day when it's a struggle to make progress on the code or doc you're writing. You can try to push through it, but there's a high chance that the next day, after a night of sleep, you'll revisit the task and think, "Who wrote this garbage? Oh, heh, me." And you throw it all out and write a much nicer solution with much less effort. It's happened to me on numerous occasions.
Work-life balance isn't just better for mental and physical longevity; it legitimately makes you a more effective engineer.

Q: What advice would you give to someone seeking to find a better balance?

Ariel: Don't break the seal!

Thank you to Ariel Hou for sharing these insights! And thanks to all of you for reading. For past articles in this series, check out our interviews with:

- Senior AI Developer Advocate Apoorva Joshi
- Developer Advocate Anaiya Raisinghani
- Senior Partner Marketing Manager Rafa Liou
- Staff Software Engineer Everton Agner

Interested in learning more about or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events. And let us know if you have any questions for our future guests when it comes to building a better work-life balance as developers. Tag us on social media: @/mongodb #LoveYourDevelopers #AwayFromTheKeyboard

January 27, 2025
Culture

Securing Digital Transformation with MongoDB and RegData

Data security and privacy have long been paramount in the financial industry, but they are especially critical for institutions undergoing digital transformations or implementing new technology. For example, the integration of artificial intelligence (AI) and machine learning (ML) into organizations' infrastructure and offerings introduces security and privacy complexities, making it all the more essential for financial organizations to safeguard sensitive information while complying with regulations.

The consequences of a data breach are extensive and significant. These incidents have transformed from simple cybersecurity concerns into catalysts for financial losses, reputational harm, legal challenges, regulatory penalties, and a marked decline in consumer trust. Even with an increased focus on data security, organizations must adopt a modern data architecture to effectively mitigate these risks. For example, using a database solution like MongoDB, with built-in encryption, role-based access control, and audit logging, can help organizations safeguard sensitive data and respond proactively to potential vulnerabilities.

Check out our AI Learning Hub to learn more about building AI-powered apps with MongoDB.

The challenge of data security in finance

Financial institutions face numerous challenges in protecting data integrity during modernization efforts. The increasing sophistication of cyberattacks, coupled with the need to comply with evolving regulations like the General Data Protection Regulation (GDPR) and the Digital Operational Resilience Act (DORA), creates a complex environment for data management. Institutions must also navigate technical sprawl, where diverse applications and data management systems complicate compliance and operational efficiency.

Addressing these challenges requires a holistic approach that integrates data protection into the core design of digital transformation initiatives. Financial institutions need to adopt robust data management practices, ensure the encryption of sensitive data, and maintain vigilant cybersecurity measures. Collaboration with trusted third-party vendors, adopting a privacy-first strategy, and complying with global data protection regulations are essential steps toward safeguarding data privacy in this rapidly evolving digital landscape.

Discover how the RegData Protection Suite (RPS), built on MongoDB, enables you to balance technological advancement with regulatory requirements.

The solution: MongoDB and RegData

MongoDB offers unparalleled reliability, scalability, and flexibility, making it an ideal choice for financial services. MongoDB enables financial institutions to combine operational and AI data in a unified interface, and it can be deployed on-premises with Enterprise Advanced or across any major cloud provider with MongoDB Atlas, including multi-cloud and hybrid cloud deployments when needed. When combined with RegData's Protection Suite (RPS), organizations can effectively tackle the challenges of digital transformation. RPS is a cloud-native application security platform designed to protect sensitive data through advanced techniques such as encryption, anonymization, and tokenization.

Figure 1. Simplified architecture of the RPS solution.

Key features of the RegData Protection Suite:

- Core Configuration: Provides services and a user interface for configuring data protection.
- RPS Engine: A sophisticated core engine equipped with various data protection tools.
  This module is the heart of the application and is responsible for all data protection; it provides encryption, anonymization, tokenization, and pseudonymization.
- RPS Reporting: A vital component focused on data protection oversight. It gathers and analyzes information on the business application activities protected by RPS to generate a range of valuable reports.
- RPS Manager: Provides end-to-end monitoring capabilities for the components of the RPS platform.
- RPS Integration: Integrates RPS seamlessly with various applications, ensuring that sensitive data is protected across diverse environments.

The synergy between MongoDB and RegData shines through in practical applications. For instance, a private bank can leverage hybrid cloud deployments to modernize its operations while maintaining data security. By utilizing RPS, the bank can protect sensitive information during cloud migrations and ensure compliance with regulatory requirements. Additionally, as financial institutions explore outsourcing, RPS helps mitigate risks by anonymizing sensitive data, allowing organizations to maintain control over their data even when leveraging external service providers.

Embracing a zero-trust approach for gen AI applications

With the rise of AI, and particularly gen AI, banks are developing more and more AI- and gen AI-powered applications. While on-premises AI and gen AI model development and testing provide a high level of data security and confidentiality, it may not be within a bank's budget to maintain a production-grade GPU compute pool, or one large enough to offer sufficient scalability and economies of scale. Faced with this dilemma, banks have begun developing models in private clouds and then deploying them to the public cloud to leverage its scalability and economies of scale.

MongoDB can serve as the unified operational data layer for a variety of data sources (structured, semi-structured, or unstructured) that may come in different forms (e.g., tabular, geospatial, network graph, or time series) for model development, training, fine-tuning, and testing. Once the model has been tested and validated, it can be deployed to the public cloud to serve AI and gen AI applications. The figure below shows the high-level architecture of how a private bank implemented its gen AI application with MongoDB and RPS.

Figure 2. Gen AI data flow architecture focused on data protection.

The road to modernization

As financial institutions navigate the complexities of digital transformation, the partnership between MongoDB and RegData offers a robust solution for securing data. By adopting a comprehensive data protection strategy, organizations can innovate confidently while ensuring compliance with regulatory standards. Embracing these technologies not only enhances data security but also paves the way for a more resilient and agile financial sector.

Establishing a robust data architecture with a modern data platform like MongoDB Atlas enables financial institutions to modernize effectively by consolidating and analyzing data in any format in real time, driving value-added services and features for consumers while ensuring that privacy and security concerns are adequately addressed with built-in security controls across all data. Whether managed in a customer environment or through MongoDB Atlas, a fully managed cloud service, MongoDB ensures robust security with features such as authentication (single sign-on and multi-factor authentication), role-based access controls, and comprehensive data encryption.
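As a brief illustration of role-based access control, a custom role can scope a service account down to read-only access on a single collection. This is a hedged mongosh sketch, not RegData or RPS configuration; the database, collection, role, and user names are hypothetical:

```javascript
// Illustrative mongosh sketch; "finance", "reports", "reportReader",
// and "analyticsSvc" are hypothetical names chosen for this example.
use admin
db.createRole({
  role: "reportReader",
  privileges: [
    // Grant only the "find" action on one collection, nothing else.
    { resource: { db: "finance", collection: "reports" }, actions: ["find"] }
  ],
  roles: []
});
db.createUser({
  user: "analyticsSvc",
  pwd: passwordPrompt(),  // prompt instead of hard-coding a credential
  roles: [{ role: "reportReader", db: "admin" }]
});
```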
These security measures act as a safeguard for sensitive financial data, mitigating the risk of unauthorized access by external parties and giving organizations the confidence to embrace AI and ML technologies.

Are you ready to harness these capabilities for your projects, or do you have questions about this? Please reach out to us at industry.solutions@mongodb.com or nfo@regdata.ch. You can also take a look at the following resources:

- RegData & MongoDB: Securing Digital Transformation
- Streamline Data Control and Compliance with RegData & MongoDB
- Implementing an Operational Data Layer

Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, "Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads?" presented by MongoDB Field CTO Rick Houlihan.

January 23, 2025
Applied
