Artificial Intelligence

Building AI-powered Apps with MongoDB

Building Gen AI with MongoDB & AI Partners | November 2024

Unless you’ve been living under a rock, you know it’s that time of year again—re:Invent season! Last week, I was in Las Vegas for AWS re:Invent, one of our industry’s most important annual conferences. re:Invent 2024 was a whirlwind of keynote speeches, inspirational panels and talks, and myriad ways to spend time with colleagues and partners alike. And this year, MongoDB had its biggest re:Invent presence ever, alongside some of the most innovative players in AI.

The headline? The MongoDB AI Applications Program (MAAP). Capgemini, Confluent, IBM, QuantumBlack AI by McKinsey, and Unstructured joined MAAP, boosting the value customers receive from the program and cementing MongoDB’s position as a leader in driving AI innovation. We also announced that MongoDB is collaborating with Meta to support developers with Meta models and the end-to-end MAAP technology stack.

Figure 1: The MongoDB booth at re:Invent 2024

MongoDB’s re:Invent AI Showcase was another showstopper. As part of the AI Hub in the re:Invent expo hall, MongoDB and partners Arcee, Arize, Fireworks AI, and Together AI collaborated on engaging demos and presentations. Meanwhile, the “Building Your AI Stack” panel—which included leaders from MongoDB and MAAP partners Anyscale, Cohere, and Fireworks AI—featured an insightful discussion on building AI technologies, challenges with taking applications to production, and what’s next in AI.

As at every re:Invent, networking opportunities abounded; I had so many interesting and fruitful conversations with partners, customers, and developers during the week’s many events, including those MongoDB sponsored—like the Cabaret of Innovation with Accenture, Anthropic, and AWS; the Galactic Gala with Cohere; and Tuesday’s fun AI Game Night with Arize, Fireworks AI, and Hasura.

Figure 2: Networking at the Galactic Gala

Whether building solutions or building relationships, MongoDB’s activities at re:Invent 2024 showcased the importance of collaboration to the future of AI. As we close out the year, I’d like to thank our amazing partners for their support—we look forward to more opportunities to collaborate in 2025! And if you want to learn more about MongoDB’s announcements at re:Invent 2024, please read this blog post by my colleague Oliver Tree.

Welcoming new AI and tech partners

In November, we also welcomed two new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Braintrust

Braintrust is an end-to-end platform for building and evaluating world-class AI apps. “We're excited to partner with MongoDB to share how you can build reliable and scalable AI applications with vector databases,” said Ankur Goyal, CEO of Braintrust. “By combining Braintrust’s simple evaluation workflows with MongoDB Atlas, developers can build an end-to-end RAG application and iterate on prompts and models without redeploying their code.”

Langtrace

Langtrace is an open-source observability tool that collects and analyzes traces to help you improve your LLM apps. “We're thrilled to join forces with MongoDB to help companies trace, debug, and optimize their RAG features for faster production deployment and better accuracy,” said Karthik Kalyanaraman, Co-founder and CTO at Langtrace AI. “MongoDB has made it dead simple to launch a scalable vector database with operational data. Our collaboration streamlines the RAG development process by empowering teams with database observability, speeding up time to market and helping companies get real value to customers faster.”

But wait, there's more!

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

December 12, 2024
Artificial Intelligence

Binary Quantization & Rescoring: 96% Less Memory, Faster Search

We are excited to share that several new vector quantization capabilities are now available in public preview in MongoDB Atlas Vector Search: support for binary quantized vector ingestion, automatic scalar quantization, and automatic binary quantization and rescoring. Together with our recently released support for scalar quantized vector ingestion, these capabilities will empower developers to scale semantic search and generative AI applications more cost-effectively. For a primer on vector quantization, check out our previous blog post.

Enhanced developer experience with native quantization in Atlas Vector Search

Effective quantization methods—specifically scalar and binary quantization—can now be performed automatically in Atlas Vector Search. This makes it easier and more cost-effective for developers to use Atlas Vector Search to unlock a wide range of applications, particularly those requiring over a million vectors. With the new “quantization” index definition parameter, developers can choose to use full-fidelity vectors by specifying “none,” or they can quantize vector embeddings by specifying the desired quantization type—“scalar” or “binary” (Figure 1). This native quantization capability supports vector embeddings from any model provider as well as MongoDB’s BinData float32 vector subtype.

Figure 1: New index definition parameters for specifying automatic quantization type in Atlas Vector Search

Scalar quantization—converting each floating-point vector component into an integer—is generally used when it's crucial to maintain search accuracy on par with full-precision vectors. Meanwhile, binary quantization—converting each floating-point component into a single bit, 0 or 1—is more suitable for scenarios where storage and memory efficiency are paramount and a slight reduction in search accuracy is acceptable. If you’re interested in learning more about this process, check out our documentation.

Binary quantization with rescoring: Balance cost and accuracy

Compared to scalar quantization, binary quantization further reduces memory usage, leading to lower costs and improved scalability—but also a decline in search accuracy. To mitigate this, when “binary” is chosen in the “quantization” index parameter, Atlas Vector Search incorporates an automatic rescoring step: a subset of the top binary vector search results is re-ranked using their full-precision counterparts, ensuring that the final search results are highly accurate despite the initial vector compression. Empirical evidence demonstrates that incorporating a rescoring step when working with binary quantized vectors can dramatically enhance search accuracy, as shown in Figure 2 below.

Figure 2: Combining binary quantization and rescoring helps retain search accuracy by up to 95%

And as Figure 3 shows, in our tests, binary quantization reduced processing memory requirements by 96% while retaining up to 95% search accuracy and improving query performance.

Figure 3: Improvements in Atlas Vector Search with the use of vector quantization

It’s worth noting that even though the quantized vectors are used for indexing and search, their full-fidelity counterparts are still stored on disk to support rescoring. Retaining the full-fidelity vectors also enables developers to perform exact vector search for experimental, high-precision use cases, such as evaluating the search accuracy of quantized vectors produced by different embedding model providers. For more on evaluating the accuracy of quantized vectors, please see our documentation.
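To make the index configuration concrete, here is a minimal sketch of creating a vector search index with automatic binary quantization using PyMongo. The database, collection, field path, and dimension count are illustrative assumptions, not values from this post.

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
collection = client["demo"]["articles"]  # hypothetical database and collection

# "quantization" accepts "none", "scalar", or "binary". With "binary",
# Atlas Vector Search automatically rescores top candidates against the
# full-fidelity vectors that remain on disk.
index_model = SearchIndexModel(
    name="vector_index",
    type="vectorSearch",
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",      # field holding the embeddings
                "numDimensions": 1024,    # must match your embedding model
                "similarity": "cosine",
                "quantization": "binary",
            }
        ]
    },
)
collection.create_search_index(model=index_model)
```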
So how can developers make the most of vector quantization? Here are some example use cases that can be made more efficient and scaled effectively with quantized vectors:

Massive knowledge bases can be used efficiently and cost-effectively for analysis and insight-oriented use cases, such as content summarization and sentiment analysis.

Unstructured data like customer reviews, articles, audio, and videos can be processed and analyzed at a much larger scale, at a lower cost and faster speed.

Using quantized vectors can enhance the performance of retrieval-augmented generation (RAG) applications. The efficient processing can support query performance from large knowledge bases, and the cost-effectiveness advantage can enable a more scalable, robust RAG system, which can result in better customer and employee experiences.

Developers can easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping (a sketch of this pattern appears at the end of this section). MongoDB’s flexible document model lets developers quickly deploy and compare embedding models’ results without the need to rebuild the index or provision an entirely new data model or set of infrastructure.

The relevance of search results or context for large language models (LLMs) can be improved by incorporating larger volumes of vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded with the same or different models.

To get started with vector quantization in Atlas Vector Search, see the following developer resources:

Documentation: Vector Quantization in Atlas Vector Search
Documentation: How to Measure the Accuracy of Your Query Results
Tutorial: How to Use Cohere's Quantized Vectors to Build Cost-effective AI Apps With MongoDB
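As promised above, here is a hedged sketch of the multi-embedding A/B-testing pattern: one document carries two candidate embeddings of the same source field, and a single vector index covers both paths. All names, dimensions, and the random stand-in vectors are hypothetical.

```python
import random
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

collection = MongoClient()["demo"]["products"]  # hypothetical collection

# Stand-in vectors; in practice each comes from a different embedding model.
vec_a = [random.random() for _ in range(1024)]  # "model A" embedding
vec_b = [random.random() for _ in range(768)]   # "model B" embedding

collection.insert_one({
    "description": "Waterproof hiking boot with recycled sole",
    "embedding_model_a": vec_a,
    "embedding_model_b": vec_b,
})

# One index can cover both paths, so each model's results can be queried
# and compared without rebuilding the index or provisioning new infrastructure.
collection.create_search_index(model=SearchIndexModel(
    name="ab_test_index",
    type="vectorSearch",
    definition={"fields": [
        {"type": "vector", "path": "embedding_model_a",
         "numDimensions": 1024, "similarity": "cosine"},
        {"type": "vector", "path": "embedding_model_b",
         "numDimensions": 768, "similarity": "cosine"},
    ]},
))
```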

December 12, 2024
Artificial Intelligence

IntellectAI Unleashes AI at Scale With MongoDB

IntellectAI, a business unit of Intellect Design Arena, is a trailblazer in AI. Since 2019, the company has been using MongoDB to drive a number of innovative use cases in the banking, financial services, and insurance (BFSI) industry. For example, Intellect Design Arena’s broader insurance business has been using MongoDB Atlas as a foundation for its architecture. Atlas’s flexibility enables Intellect Design Arena to manage varied and constantly evolving datasets and increase operational performance. Building on this experience, the company looked at deepening its use of MongoDB Atlas’s unique AI and search capabilities for its new IntellectAI division.

IntellectAI Partner and Chief Technology Officer Deepak Dastrala spoke on the MongoDB.local Mumbai stage in September 2024. Dastrala shared how the company has built a powerful, scalable, and highly accurate AI platform-as-a-service offering, Purple Fabric, using MongoDB Atlas and Atlas Vector Search.

Using AI to generate actionable compliance insights for clients

Purple Fabric helps transform enterprise data into actionable AI insights and solutions by making data ready for retrieval-augmented generation (RAG). The platform collects and analyzes structured and unstructured enterprise data, policies, market data, regulatory information, and tacit knowledge to enable its AI Expert Agent System to achieve precise, goal-driven outcomes with accuracy and speed.

A significant part of IntellectAI’s work involves assessing environmental, social, and governance (ESG) compliance. This requires companies to monitor diverse nonfinancial factors such as child labor practices, supply chain ethics, and biodiversity. “Historically, 80% to 85% of AI projects fail because people are still worried about the quality of the data. With Generative AI, which is often unstructured, this concern becomes even more significant,” said Deepak Dastrala.

According to Dastrala, the challenge today is less about building AI tools than about operationalizing AI effectively. A prime example of this is IntellectAI’s work with one of the largest sovereign wealth funds in the world, which manages over $1.5 trillion across 9,000 companies. The fund sought to use AI to make responsible investment decisions based on millions of unique data points across those companies, including compliance, risk prediction, and impact assessment. This included processing both structured and unstructured data to enable the fund to make informed, real-time decisions.

“We had to process almost 10 million documents in more than 30 different data formats—text and image—and correlate both structured and unstructured data to provide those particular hard-to-find insights,” said Dastrala. “We ingested hundreds of millions of vectors across these documents, and this is where we truly understood the power of MongoDB.”

For example, by leveraging MongoDB's capabilities, including time series collections, IntellectAI simplifies the processing of unstructured and semi-structured data from companies' reports over various years, extracting key performance metrics and trends to enhance compliance insights. “MongoDB Atlas and Vector Search give us flexibility around the schema and how we can turn particular data into knowledge,” Dastrala said.
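The post doesn’t show IntellectAI’s schema, but as a rough sketch of the time series capability mentioned above, a collection for per-report ESG metrics might be created like this with PyMongo; every name and value here is hypothetical.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient()["esg"]  # hypothetical database

# A time series collection stores one document per extracted metric reading;
# "metaField" groups readings by company and metric for efficient queries.
db.create_collection(
    "report_metrics",
    timeseries={
        "timeField": "reportDate",  # when the reported metric applies
        "metaField": "source",      # e.g., {"company": ..., "metric": ...}
        "granularity": "hours",
    },
)

db.report_metrics.insert_one({
    "reportDate": datetime(2023, 12, 31, tzinfo=timezone.utc),
    "source": {"company": "ACME Corp", "metric": "scope1_emissions_t"},
    "value": 1824.5,
})
```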
For Dastrala, there are four unique advantages of working with MongoDB—particularly using MongoDB Atlas Vector Search—that other companies should consider when building long-term AI strategies: a unified data model, multimodality, dynamic data linking, and simplicity. “For me, the unified data model is a really big thing because a stand-alone vector database will not help you. The kind of data that you will continue to ingest will increase, and there are no limits. So whatever choices that you make, you need to make the choices from the long-term perspective,” said Dastrala.

Delivering massive scale, driving more than 90% AI accuracy, and accelerating decision-making with MongoDB

Before IntellectAI built this ESG capability, its client relied on subject matter experts, but they could examine only a limited number of companies and datasets and were unable to scale their investigation of portfolios or information. “If you want to do it at scale, you need proper enterprise support, and that’s where MongoDB became really handy for us. We are able to give 100% coverage and do what the ESG analysts were able to do for this organization almost a thousand times faster,” said Dastrala.

Previously, analysts could examine only between 100 and 150 companies. With MongoDB Atlas and Atlas Vector Search, Purple Fabric can now process information from over 8,000 companies across the world, covering different languages and delivering more than 90% accuracy. “Generally, RAG will probably give you 80% to 85% accuracy. But in our case, we are talking about a fund deciding whether to invest billions or not in a company, so the accuracy should be 90% minimum,” said Dastrala. “What we are doing is not ‘simple search’; it is very contextual, and MongoDB helps us provide that high-dimension data.”

Concluding his presentation on the MongoDB.local stage, Dastrala reminded the audience why IntellectAI is using MongoDB’s unique capabilities to support its long-term vision: “Multimodality is very important because today we are using text and images, but tomorrow we might use audio, video, and more. And don’t forget, from a developer perspective, how important it is to keep the simplicity and leverage all the options that MongoDB provides.”

This is just the beginning for IntellectAI and its Purple Fabric platform. “Because we are doing more and more with greater accuracy, our customers have started giving us more problems to solve. And this is absolutely happening at a scale [that] is unprecedented,” said Dastrala.

Using MongoDB Atlas to drive broader business benefits across Intellect Design

The success of the Purple Fabric platform is leading Intellect Design’s broader business to look at MongoDB Atlas for more use cases. Intellect Design is currently in the process of migrating more of its insurance and wealth platforms onto MongoDB Atlas, as well as leveraging the product family to support the next phase of its app modernization strategy. Using MongoDB Atlas, Intellect Design aims to improve resilience, support scalable growth, decrease time to market, and enhance data insights.

Head over to our product page to learn more about MongoDB Atlas. To learn more about how MongoDB Atlas Vector Search can help you build or deepen your AI and search capabilities, visit our Vector Search page.

December 12, 2024
Artificial Intelligence

AI-Powered Call Centers: A New Era of Customer Service

Customer satisfaction is critical for insurance companies. Studies have shown that companies with superior customer experiences consistently outperform their peers. In fact, McKinsey found that life and property/casualty insurers with superior customer experiences saw a significant 20% and 65% increase in Total Shareholder Return, respectively, over five years. A satisfied customer is a loyal customer: they are 80% more likely to renew their policies, directly contributing to sustainable growth. However, one major challenge faced by many insurance companies is the inefficiency of their call centers. Agents often struggle to quickly locate and deliver accurate information to customers, leading to frustration and dissatisfaction.

This article explores how Dataworkz and MongoDB can transform call center operations. By converting call recordings into searchable vectors (numerical representations of data points in a multi-dimensional space), businesses can quickly access relevant information and improve customer service. We'll dig into how the integration of Amazon Transcribe, Cohere, and MongoDB Atlas Vector Search—as well as Dataworkz's RAG-as-a-service platform—is achieving this transformation.

From call recordings to vectors: A data-driven approach

Customer service interactions are goldmines of valuable insights. By analyzing call recordings, we can identify successful resolution strategies and uncover frequently asked questions. In turn, by making this information—which is often buried in audio files—accessible to agents, they can give customers faster and more accurate assistance. However, the vast volume and unstructured nature of these audio files make it challenging to extract actionable information efficiently.

To address this challenge, we propose a pipeline that leverages AI and analytics to transform raw audio recordings into vectors, as shown in Figure 1:

Storage of raw audio files: Past call recordings are stored in their original audio format.
Processing of the audio files with AI and analytics services (such as Amazon Transcribe Call Analytics): speech-to-text conversion, summarization of content, and vectorization.
Storage of vectors and metadata: The generated vectors and associated metadata (e.g., call timestamps, agent information) are stored in an operational data store.

Figure 1: Customer service call insight extraction and vectorization flow

Once the data is stored in vector format within the operational data store, it becomes accessible for real-time applications. This data can be consumed directly through vector search or integrated into a retrieval-augmented generation (RAG) architecture, a technique that combines the capabilities of large language models (LLMs) with external knowledge sources to generate more accurate and informative outputs.

Introducing Dataworkz: Simplifying RAG implementation

Building RAG pipelines can be cumbersome and time-consuming for developers who must learn yet another stack of technologies. Especially in this initial phase, where companies want to experiment and move fast, it is essential to use tools that abstract away complexity and don’t require deep knowledge of each component, so teams can experiment with and realize the benefits of RAG quickly. Dataworkz offers a powerful and composable RAG-as-a-service platform that streamlines the process of building RAG applications for enterprises.
To operationalize RAG effectively, organizations need to master five key capabilities:

ETL for LLMs: Dataworkz connects with diverse data sources and formats, transforming the data to make it ready for consumption by generative AI applications.
Indexing: The platform breaks down data into smaller chunks and creates embeddings that capture semantics, storing them in a vector database.
Retrieval: Dataworkz ensures the retrieval of accurate information in response to user queries, a critical part of the RAG process.
Synthesis: The retrieved information is then used to build the context for a foundational model, generating responses grounded in reality.
Monitoring: With many moving parts in the RAG system, Dataworkz provides robust monitoring capabilities essential for production use cases.

Dataworkz's intuitive point-and-click interface (as seen in Video 1) simplifies RAG implementation, allowing enterprises to quickly operationalize AI applications. The platform offers flexibility and choice in data connectors, embedding models, vector stores, and language models. Additionally, tools like A/B testing ensure the quality and reliability of generated responses. This combination of ease of use, optionality, and quality assurance is a key tenet of Dataworkz's "RAG as a Service" offering.

Diving deeper: System architecture and functionalities

Now that we’ve looked at the components of the pre-processing pipeline, let’s explore the proposed real-time system architecture in detail. It comprises the following modules and functions (see Figure 2):

Amazon Transcribe receives the audio coming from the customer’s phone and converts it into text.
Cohere’s embedding model, served through Amazon Bedrock, vectorizes the text coming from Transcribe.
MongoDB Atlas Vector Search receives the query vector and returns a document that contains the most semantically similar FAQ in the database.

Figure 2: System architecture and modules

Here are a couple of FAQs we used for the demo:

Q: “Can you explain the different types of coverage available for my home insurance?”
A: “Home insurance typically includes coverage for the structure of your home, your personal belongings, liability protection, and additional living expenses in case you need to temporarily relocate. I can provide more detailed information on each type if you'd like.”

Q: “What is the process for adding a new driver to my auto insurance policy?"
A: “To add a new driver to your auto insurance policy, I'll need some details about the driver, such as their name, date of birth, and driver's license number. We can add them to your policy over the phone, or you can do it through our online portal.”

Note that the questions are shown just for reference and are not used for retrieval. The actual question is provided by the user through the voice interface and then matched in real time with the answers in the database using Vector Search. This information is finally presented to the customer service operator in text form (see Figure 3).

The proposed architecture is simple but very powerful, easy to implement, and effective. Moreover, it can serve as a foundation for more advanced use cases that require complex interactions, such as agentic workflows, and iterative and multi-step processes that combine LLMs and hybrid search to complete sophisticated tasks.
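To make the real-time flow concrete, here is a hedged sketch of the embed-and-retrieve step in Python. It assumes the transcribed question arrives as text, that FAQ documents carry their vectors in an "embedding" field under a vector index named "faq_index", and that Cohere's English v3 embedding model is served through Amazon Bedrock; the collection and index names are invented for illustration.

```python
import json

import boto3
from pymongo import MongoClient

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
faqs = MongoClient()["support"]["faqs"]  # hypothetical collection

def best_faq(transcribed_question: str) -> dict:
    # 1) Vectorize the transcribed customer question with Cohere on Bedrock.
    response = bedrock.invoke_model(
        modelId="cohere.embed-english-v3",
        body=json.dumps({
            "texts": [transcribed_question],
            "input_type": "search_query",  # queries and docs use different types
        }),
    )
    query_vector = json.loads(response["body"].read())["embeddings"][0]

    # 2) Retrieve the most semantically similar FAQ with Atlas Vector Search.
    pipeline = [
        {"$vectorSearch": {
            "index": "faq_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 1,
        }},
        {"$project": {"_id": 0, "question": 1, "answer": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]
    return next(faqs.aggregate(pipeline))

print(best_faq("Can I add my daughter to my auto insurance policy?"))
```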
Figure 3: App interface, displaying what has been asked by the customer (left) and how the information is presented to the customer service operator (right)

This solution not only impacts human operator workflows but can also underpin chatbots and voicebots, enabling them to provide more relevant and contextual customer responses.

Building a better future for customer service

By seamlessly integrating analytical and operational data streams, insurance companies can significantly enhance both operational efficiency and customer satisfaction. Our system empowers businesses to optimize staffing, accelerate inquiry resolution, and deliver superior customer service through data-driven, real-time insights. To embark on your own customer service transformation, explore our GitHub repository and take advantage of the Dataworkz free tier.

November 27, 2024
Artificial Intelligence

Better Digital Banking Experiences with AI and MongoDB

Interactive banking represents a new era in financial services where customers engage with digital platforms that anticipate, understand, and meet their needs in real time. This approach encompasses AI-driven technologies such as chatbots, virtual assistants, and predictive analytics that allow banks to enhance digital self-service while delivering personalized, context-aware interactions.

According to Accenture’s 2023 consumer banking study, 44% of consumers aged 18-44 reported difficulty accessing human support when needed, underscoring the demand for more responsive digital solutions that help bridge this gap between customers and financial services. Generative AI technologies like chatbots and virtual assistants can fill this need by instantly addressing inquiries, providing tailored financial advice, and anticipating future needs. This shift has tremendous growth potential; the global chatbot market is expected to grow at a CAGR of 23.3% from 2023 to 2030, with the financial sector experiencing the fastest growth rate of 24.0%. This shift is more than just a convenience; it aims to create a smarter, more engaging, and intuitive banking journey for every user.

Simplifying self-service banking with AI

Navigating daily banking activities like transfers, payments, and withdrawals can often raise immediate questions for customers: “Can I overdraft my account?” “What will the penalties be?” or “How can I avoid these fees?” While the answers usually lie within the bank’s terms and conditions, these documents are often dense, complex, and overwhelming for the average user. At the same time, customers value their independence and want to handle their banking needs through self-service channels, but wading through extensive fine print isn't what they signed up for.

By integrating AI-driven advisors into the digital banking experience, banks can provide a seamless, in-app solution that delivers instant, relevant answers. This removes the need for customers to leave the app to sift through pages of bank documentation in search of answers, or worse, endure the inconvenience of calling customer service. The result is a smoother, more user-friendly interaction where customers feel supported in their self-service journey, free from the frustration of navigating traditional, cumbersome information sources. The entire experience remains within the application, enhancing convenience and efficiency.

Solution overview

This AI-driven solution enhances the self-service experience in digital banking by applying retrieval-augmented generation (RAG) principles, which combine the power of generative AI with reliable information retrieval, ensuring that the chatbot provides accurate, contextually relevant responses. The approach begins by processing dense, text-heavy documents, like terms and conditions, that are often the source of customer inquiries. These documents are divided into smaller, manageable chunks, which are then vectorized to create searchable data representations. Storing these vectorized chunks in MongoDB Atlas allows for efficient querying using MongoDB Atlas Vector Search, making it possible to instantly retrieve relevant information based on the customer’s question.

Figure 1: Detailed solution architecture

When a customer inputs a question in the banking app, the system quickly identifies and retrieves the most relevant chunks using semantic search.
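The post doesn’t include the ingestion code, but a minimal sketch of the chunk-embed-store step described above might look like the following, using sentence-transformers as one possible embedding model. The chunking strategy, model choice, and collection name are all assumptions.

```python
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # one possible embedding model
chunks_coll = MongoClient()["bank"]["tc_chunks"]  # hypothetical collection

def ingest_terms_and_conditions(text: str, doc_name: str, chunk_words: int = 200):
    # Naive fixed-size chunking; production systems usually split on
    # headings or sentences and add overlap between adjacent chunks.
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]

    embeddings = model.encode(chunks)  # one vector per chunk
    chunks_coll.insert_many([
        {"source": doc_name, "chunk_id": i, "text": chunk,
         "embedding": vector.tolist()}
        for i, (chunk, vector) in enumerate(zip(chunks, embeddings))
    ])
```

A vector search index over the "embedding" path then makes these chunks retrievable with semantic search at question time.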
At query time, the AI uses the retrieved chunks to generate clear, contextually relevant answers within the app, enabling a smooth, frustration-free experience without requiring customers to sift through dense documents or contact support.

Figure 2: Leafy Bank mock-up chatbot in action

How MongoDB supports AI-driven banking solutions

MongoDB offers unique capabilities that empower financial institutions to build and scale AI-driven applications:

Unified data model for flexibility: MongoDB’s flexible document model unifies structured and unstructured data, creating a consistent dataset that enhances the AI’s ability to understand and respond to complex queries. This model enables financial institutions to store and manage customer data, transaction history, and document content within a single system, streamlining interactions and making AI responses more contextually relevant.

Vector search for enhanced querying: MongoDB Atlas Vector Search makes it easy to perform semantic searches on vectorized document chunks, quickly retrieving the most relevant information to answer user questions. This capability allows the AI to find precise answers within dense documents, enhancing the self-service experience for customers.

Scalable integration with AI models: MongoDB is designed to work seamlessly with leading AI frameworks, allowing banks to integrate and scale AI applications quickly and efficiently. By aligning MongoDB Atlas with cloud-based LLM providers, banks can use the best tools available to interpret and respond to customer queries accurately, meeting demand with responsive, real-time answers.

High performance and cost efficiency: MongoDB’s multi-cloud, developer-friendly platform allows financial institutions to innovate without costly infrastructure changes. It’s built to scale as data and AI needs grow, ensuring banks can continually improve the customer experience with minimal disruptions. MongoDB’s built-in scalability allows banks to expand their AI capabilities effortlessly, offering a future-proof foundation for digital banking.

Building future-proof applications

Implementing generative AI presents several advantages, not only for end users of interactive banking applications but also for financial institutions. An enhanced user experience encourages customer satisfaction, ensures retention, boosts reputation, and reduces customer turnover, while unlocking new opportunities for cross-selling and up-selling to increase revenue, drive growth, and elevate customer value.

Moreover, adopting AI-driven initiatives prepares the groundwork for businesses to develop innovative, creative, and future-proof applications that address customer needs, and to upgrade business applications with features that are shaping the industry and will continue to do so. Here are some examples:

Summarize and categorize transactional information by powering applications with MongoDB’s Real-Time Analytics.
Understand and find trends based on customer behavior to strengthen fraud prevention, anti-money laundering (AML), and credit card application processing (just to mention a few).
Offer investing, budgeting, and loan assessments through an AI-powered conversational banking experience.

In today’s data-driven world, companies face increasing pressure to stay ahead of rapid technological advancements and ever-evolving customer demands. Now more than ever, businesses must deliver intuitive, robust, and high-performing services through their applications to remain competitive and meet user expectations.
Luckily, MongoDB provides businesses with comprehensive reference architectures for building generative AI applications, plus an end-to-end technology stack that includes integrations with leading technology providers, professional services, and a coordinated support system through the MongoDB AI Applications Program (MAAP).

By building AI-enriched applications with the leading multi-cloud developer data platform, companies can leverage low-cost, efficient solutions through MongoDB’s flexible and scalable document model, which empowers businesses to unify real-time, operational, unstructured, and AI-related data, extending and customizing their applications to seize upcoming technological opportunities.

Check out these additional resources to get started on your AI journey with MongoDB:

How Leading Industries are Transforming with AI and MongoDB Atlas - E-book
Our Solutions Library, where you can learn about different use cases for gen AI and other interesting topics applied to financial services and many other industries.

November 26, 2024
Artificial Intelligence

Hanabi Technologies Uses MongoDB to Power AI Assistant, Hana

For all the hype surrounding generative AI, cynics tend to view the few real-world implementations as little more than “fancy chatbots.” But for Abhinav Aggarwal, CEO of Hanabi Technologies, the idea of a generative AI-powered bot that is more than just an assistant was intriguing. “I’d been using ChatGPT since it launched,” said Aggarwal. “That got me thinking: How could we make a chatbot that was like a team member?” And with that concept, Hana was born.

The problem with bots

“Most generative AI chatbots do not act like people; they wait for a command and give a response,” said Aggarwal. “We wanted to create a human-like chatbot that would proactively help people based on what they wanted—automating reminders, for example, or fetching time zones from your calendar to correctly schedule meetings.”

Hanabi’s flagship product, Hana, is an AI assistant designed to enhance team collaboration within Google Chat, working in concert with Google Workspace and its suite of products. “Our target customers are smaller companies of between 10 and 50 people. At this size you’re not going to build your own agent from scratch,” he said. Hana integrates with Google APIs to deliver a human-like assistant that chimes in with helpful interventions, such as automatically setting reminders and making sure meetings are booked in the right time zone for each participant. “Hana is designed to bring AI to smaller companies and help them collaborate in a space where they are already working—Google Workspace,” Aggarwal explained.

The MongoDB Atlas solution

For Hana to act like a member of the team, Hanabi needed to process massive amounts of data to support advanced features like retrieval-augmented generation (RAG) for better information retrieval across Google Docs and many other sources. And with a rapidly growing user base of over 600 organizations and 17,000+ installs, Hanabi also required a secure, scalable, and high-performing data storage solution.

MongoDB Atlas provided a flexible document model, a built-in vector database, and scalable cloud-based infrastructure, freeing Hanabi engineers to build new features for Hana rather than focusing on rote tasks like data extract, transform, and load processes or manual scaling and provisioning. Now, MongoDB Atlas handles a variety of responsibilities:

Scalability and security: MongoDB Atlas’s auto-scaling and automatic backup features have enabled Hanabi to seamlessly grow its user base without the need for manual database management.

RAG: MongoDB Atlas plays a critical role in Hana’s RAG functionality. The platform enables Hanabi to split Google Docs into small sections, create embeddings, and store these sections in Atlas’s vector database.

Development processes: According to Aggarwal, MongoDB’s flexibility in managing changing schemas has been essential to the company’s fast-paced development cycle.

Data visualization: Using MongoDB Atlas Charts has enabled Hanabi to create comprehensive dashboards for real-time data visualization. This has helped the team track usage, set reminders, and optimize performance without needing to build a manual dashboard.

Impact and results

With MongoDB Atlas, Hanabi has successfully scaled Hana to meet the demands of its rapidly expanding user base. The integration also enables Hana to offer powerful features like automatic interactions with customers, advanced information retrieval from Google Docs, and manually added memory snippets, making it an essential tool for teams around the world.
Next steps

Hanabi plans to continue integrating more tools into Hana while expanding its reach to personal Gmail users. The company is also rolling out a new automatic-interaction feature, further enhancing Hana’s ability to proactively assist users without direct commands. MongoDB Atlas remains a key component of Hanabi’s stack, alongside Google Kubernetes Engine, NestJS, and LangChain, enabling Hanabi to focus on innovating to improve the customer experience.

Tech stack

MongoDB Atlas
Google Kubernetes Engine
NestJS
LangChain

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free MongoDB Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com.

November 21, 2024
Artificial Intelligence

MongoDB, Microsoft Team Up to Enhance Copilot in VS Code

As modern applications grow increasingly complex, developers face the challenge of meeting market demands for faster, smarter solutions. To stay ahead, they need tools that streamline their workflows, available directly in the environments where they build. According to the 2024 Stack Overflow Developer Survey, Microsoft’s Visual Studio Code (VS Code) is the integrated development environment (IDE) of choice for 74% of professional developers, serving as a central hub for building, testing, and deploying applications. With the rise of AI-powered tools like GitHub Copilot—which is used by 44% of professional developers—there’s a growing demand for intelligent assistance in the development process without disrupting flow.

At MongoDB, we believe that the future of development lies in democratizing the value of these experiences by incorporating domain-specific knowledge and capabilities directly into developer flows. That’s why we’re thrilled to announce the public preview of MongoDB’s extension to GitHub Copilot in VS Code. With this integration, developers can effortlessly generate MongoDB queries, inspect collection schemas, and get answers from the latest MongoDB docs—all without leaving their IDE.

“Our collaboration with MongoDB continues to bring powerful, integrated solutions to developers building the modern applications of the future. The new MongoDB extension for GitHub Copilot exemplifies a shared commitment to the developer experience, leveraging AI to ensure that workflows are optimized for developer productivity by keeping everything developers need within reach, without breaking their flow.” —Isidor Nikolic, Senior Product Manager for VS Code, Microsoft

But we’re not stopping there. As AI continues to evolve, so will the ways developers interact with their tools. Stay tuned for more exciting developments next week at Microsoft Ignite, where we’ll unveil more ways we’re pushing the boundaries of what’s possible with AI through MongoDB and Microsoft’s partnership!

What is MongoDB's Copilot extension?

MongoDB’s Copilot extension supercharges your GitHub Copilot in VS Code with MongoDB domain knowledge. The Copilot integration is built into the MongoDB for VS Code extension, which has more than 1.8M downloads in the VS Code marketplace today. Type ‘@MongoDB’ in Copilot chat and take advantage of three transformative commands:

Generate queries from natural language (/query): generates accurate MongoDB queries by passing collection schema as context to GitHub Copilot.
Query MongoDB documentation (/docs): answers any documentation question using the latest MongoDB documentation through retrieval-augmented generation (RAG).
Browse collection schema (/schema): provides schema information for any collection, which is useful for data modeling with the Copilot extension.

Generate queries from natural language

This command transforms natural language prompts into MongoDB queries, leveraging your collection schema to produce precise, valid queries. It eliminates the need to manually write complex query syntax, and allows developers to quickly extract data without taking their focus away from building applications. Whether you run the query directly from the Copilot chat or refine it in a MongoDB playground file, we’ve sped up the query-building process by deeply integrating these capabilities into the existing flow of the MongoDB VS Code extension.
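As a purely hypothetical illustration of the /query flow: the extension emits playground JavaScript, but the shape of a generated query might correspond to the following PyMongo aggregation. The prompt, collection, and field names are invented.

```python
# Prompt in Copilot chat (hypothetical):
#   @MongoDB /query total order value per customer in 2024, highest first
from datetime import datetime

from pymongo import MongoClient

orders = MongoClient()["shop"]["orders"]  # hypothetical collection

pipeline = [
    {"$match": {"orderDate": {"$gte": datetime(2024, 1, 1),
                              "$lt": datetime(2025, 1, 1)}}},
    {"$group": {"_id": "$customerId", "totalValue": {"$sum": "$amount"}}},
    {"$sort": {"totalValue": -1}},
]
for row in orders.aggregate(pipeline):
    print(row)
```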
Query MongoDB documentation

The /docs command answers MongoDB documentation-specific questions, complemented by direct links to the official documentation site. There’s no need to switch back and forth between your browser and your IDE; the Copilot extension calls out to the MongoDB Documentation Chatbot API, which leverages retrieval-augmented generation technology to generate responses informed by the most recent version of the MongoDB documentation. In the near future, these questions will be smartly routed to documentation for the specific server version of the cluster you are connected to in the MongoDB VS Code extension.

Browse collection schema

The /schema command offers quick access to collection schemas, making it easier for developers to access and interact with their data model in real time. This can be helpful in situations where developers are debugging with Copilot or just want to know valid field names while developing their applications. Developers can additionally export collection schemas into JSON files or ask follow-up questions directly to brainstorm data modeling techniques with the MongoDB Copilot extension.

On the horizon

This is just the start of our work on MongoDB’s Copilot extension. As we continue to improve the experience with new features—like translating and testing queries to and from popular programming languages, and in-line query generation in Playgrounds—we remain focused on democratizing AI-driven workflows, empowering developers to access the tools and knowledge they need to build smarter, faster, and more efficiently, right within their existing environments.

Download MongoDB’s VS Code extension and enable the MongoDB chat experience to get started today.

November 13, 2024
Artificial Intelligence

Building Gen AI with MongoDB & AI Partners | October 2024

It’s no surprise that AI is a topic of seemingly every professional conversation and meeting nowadays—my friends joke that 11 out of 10 words that come out of my mouth are “gen AI.” But an important question remains: do organizations truly know how to harness AI, or do they simply feel pressured to join the crowd? Are they driven by FOMO more than anything else?

One thing is for sure: adopting generative AI still presents a huge learning curve. That is why we’ve been working to provide the right tools for companies to build innovative gen AI apps, and why we offer organizations a variety of AI knowledge and guidance, regardless of where they are with gen AI. We’re fortunate to work with our industry-leading partners to help educate and shape this nascent market. Working so closely with them on product launches, integrations, and solving real-world challenges allows us to bring diverse perspectives and a better understanding of AI to our customers, giving them the technology and confidence to move forward even before engaging with tough use cases and specific technical problems (something that the MongoDB AI Applications Program can definitely help with).

One of our main educational initiatives has been our webinar series with our top-tier MAAP partners. We’ve consistently launched video content to deepen understanding of topics essential to gen AI for enterprises, from broader questions such as “how can my company generate AI-driven outcomes” and “how can I modernize my workload” to specific, tangible topics such as “how to build a chatbot that knows my business.” Each session is designed to move beyond the basics, sharing insights from experts in AI and addressing the burning questions and challenges that matter most to our customers.

Welcoming new AI and tech partners

In October, we also welcomed four new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Astronomer

Astronomer empowers data teams to bring mission-critical software, analytics, and AI to life and is the company behind Astro, the industry-leading data orchestration and observability platform powered by Apache Airflow. “Astronomer's partnership with MongoDB is redefining RAG workflows for GenAI workloads. By integrating Astronomer's managed Apache Airflow platform with MongoDB Atlas' powerful vector database capabilities, we enable organizations to orchestrate complex data pipelines that fuel advanced AI and machine learning applications,” said Julian LaNeve, CTO at Astronomer. “This collaboration empowers data teams to manage real-time, high-dimensional data with ease, accelerating the journey from raw data to actionable insights and transforming how businesses harness the power of generative AI.”

CloudZero

CloudZero is a cloud cost optimization platform that automates the collection, allocation, and analysis of cloud costs to identify savings opportunities and improve cloud efficiency rates. “Database spending is one of the shared costs that can make it tricky for organizations to reach 100% cost allocation. CloudZero eliminates that problem,” said Anand Sundaram, Senior Vice President of Product at CloudZero. “Our industry-leading allocation engine can organize MongoDB spend in a matter of hours, tracing it precisely to the products, features, customers, and/or teams responsible for it. This way, companies get a clear view of what’s driving their costs, who’s accountable, and how to optimize to maximize their cloud efficiency.”

ObjectBox

ObjectBox is an on-device vector database for mobile, IoT, and embedded devices that enables storing, syncing, and querying data locally, online and offline. “We’re thrilled to partner with MongoDB to give developers an edge,” said Vivien Dollinger, CEO and co-founder of ObjectBox. “By combining MongoDB’s cloud and scalability with ObjectBox’s high-performance on-device database and data sync, we empower developers to build fast, data-rich applications that feel right at home across devices and environments. Offline, online, edge, cloud, whenever, wherever... We’re here to enable your data with speed and reliability.”

Rasa

Rasa is a flexible framework for building conversational AI platforms that lets companies develop scalable generative AI assistants that hit the market faster. “Rasa is excited to partner with MongoDB to empower companies in building conversational AI experiences. Together, we’re helping create generative AI assistants that save costs, speed up development, and maintain full brand control and security,” said Melissa Gordon, CEO of Rasa. “With MongoDB, deploying production-ready generative AI assistants is seamless, and we’re eager to continue accelerating our customers’ journey toward trusted conversational AI solutions.”

But wait, there's more!

Whether you’re starting out or scaling up, MongoDB and our partners are here with the resources, expertise, and trusted guidance to help you succeed in your gen AI strategy! And if you have any suggestions for a good webinar topic, don’t hesitate to reach out. To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

November 11, 2024
Artificial Intelligence

MongoDB and Partners: Building the AI Future, Together

If you’re like me, over the past year you’ve closely watched AI’s developments—and the world’s reactions to them. From infectious excitement about AI’s capabilities to impatience with its cost and return on investment, every day has been filled with AI twists and turns. It’s been quite the roller coaster.

During the ride, from time to time I’ve wondered where AI falls on the Gartner hype cycle, which gives "a view of how a technology or application will evolve over time." Have we hit the "peak of inflated expectations" only to fall into the "trough of disillusionment?" Or is the hype cycle an imperfect guide, as The Economist argues? The reality is that it takes time for any new technology—even transformative ones like AI—to take hold. And every advance, no matter how big, has had its detractors. A famous example is that of Picasso (!), who in 1968 said, “Computers are useless. They can only give you answers.” (!!)

For our part, MongoDB is convinced that AI is a once-in-a-generation technology that will enhance every future application—a belief that has been reinforced by the incredible work our partners have shared at MongoDB’s 2024 events.

Speeding AI development

MongoDB is committed to helping organizations of all sizes succeed with AI, and one way we’re doing that is by collaborating with the MongoDB partner ecosystem to create powerful, user-friendly AI development tools and solutions.

For example, Fireworks.ai—which is a member of the MongoDB AI Applications Program ecosystem—created an inference solution that hosts gen AI models and supports containerized deployments. This tool makes it easier for developers to build and deploy powerful applications with a range of easy-to-use tools and customization options. They can choose to use state-of-the-art, open-source language, image, and multimodal foundation models off the shelf, or they can customize and fine-tune models to their needs. Jointly, Fireworks.ai and MongoDB provide a solution for developers who want to leverage highly curated and optimized open-source models and combine these with their organization’s own proprietary data—and to do so with unparalleled speed and security. “MongoDB is one of the most sophisticated database providers, and it’s very easy to use,” said Benny Chen, cofounder of Fireworks.ai. "We want developers to be able to use these tools, and we want to work with providers who enable and empower developers."

Nomic, another MAAP ecosystem member, also enables developers with best-in-class solutions across the entire unstructured data workflow. Their Embed offering, available through the Nomic API, allows users to vectorize large-scale datasets for use in text, image, and multimodal retrieval applications, including retrieval-augmented generation (RAG), using only their web browser. The Nomic-MongoDB solution is a highly efficient, open-weight model that developers can use to visualize the unstructured datasets they store in MongoDB Atlas. These insights help users quickly discover trends and articulate data-driven value propositions. Nomic also supported the recently announced vector quantization in MongoDB Atlas Vector Search, which reduces vector sizes while preserving performance.

Last—but hardly least!—there’s our new reference architecture with MAAP partners AWS and Anthropic. Announced at MongoDB.local London, the reference architecture supports building memory-enhanced AI agents and is designed to streamline complex processes and develop smarter, more responsive applications.
For more—including a link to the code on GitHub—check out the MongoDB Developer Center.

Making AI work for anyone and everyone

The companies MongoDB partners with aren’t just making gen AI easier for developers—they’re building tools for everyone. For example, Capgemini has invested $2 billion in gen AI and is training 100,000 of its employees in the technology. GenYoda, a solution that helps insurance professionals with their daily work, is a product of this investment.

GenYoda leverages MongoDB Atlas Vector Search to analyze large amounts of customer data, like policy statements, premiums, claims history, and health information. Using GenYoda, insurance professionals can quickly analyze underwriters’ reports to make informed decisions, create longitudinal health summaries, and streamline customer interactions to improve contact center efficiency.

GenYoda can ingest 100,000 documents in just a few hours and respond to users’ queries in two to three seconds—a metric on par with the most widely used gen AI models. And it produces results: in one example, by using Capgemini’s solution an insurer was able to increase productivity by 15%, add new reports 25% faster (thus speeding decision-making), and reduce the manual effort of searching PDFs, increasing efficiency by 10%.

Building the future of AI together

So, what’s next? Honestly, I’m as curious as you are. But I’m also incredibly excited. At MongoDB, we’re active participants in the AI revolution, working to embrace the possibilities that lie ahead. The future of gen AI is bright, and I can’t wait to see what we’ll build together.

To learn more about how MongoDB can accelerate your AI journey, explore the MongoDB AI Applications Program.

November 4, 2024
Artificial Intelligence

Reflections On Our Recent AI "Think-A-Thon"

Interesting ideas are bound to emerge when great minds come together, and there was no shortage of them on October 2nd, when MongoDB’s Developer Relations team hosted our second-ever AI Build Together event at MongoDB.local London.

In some ways, the event is similar to a hackathon: a group of developers come together to solve a problem. But in other ways, the event is quite different. While hackathons normally take an entire day and involve intensive coding, the AI Build Together events are organized to take place over just a few hours and don't involve any coding at all. Instead, it’s all based around discussion and ideation. For these reasons, MongoDB’s Developer Relations team likes to dub them “think-a-thons.”

Our first AI Build Together event was held earlier this year at .local NYC. After seeing the energy in the room and the excitement from attendees, our Developer Relations team knew it wanted to host another one. The .local London event’s fifty attendees—which included developers from numerous industries and leading AI innovators who served as mentors—came together to brainstorm and discuss AI-based solutions to common industry problems.

.local London AI Build Together attendees brainstorming AI solutions for the healthcare industry

The AI mentors included: Loghman Zadeh (gravity9), Ben Gutkovich (Superlinked), Jesse Martin (Hasura), Marlene Mhangami (Microsoft), Igor Alekseev (AWS), and John Willis and Patrick Debois (co-founders of the DevOps movement). Upon arrival, participants joined a workflow group best aligned with their industry and/or area of interest—AI for Education, AI for DevOps, AI for Healthcare, AI for Optimizing Travel, AI for Supply Chain, and AI for Productivity.

The AI for Productivity group collaborating on their workflow

The discussions were lively, and it was amazing to see how much energy the attendees brought to them. For example, the AI for Education workflow group vigorously discussed developing a personalized AI education coach to help students develop their educational plans and support them with career advice. Meanwhile, the AI for Healthcare workflow group focused on the idea of creating an AI-driven tool to provide personalized healthcare to patients and real-time insights to their providers. The AI for Productivity team came up with a clever product that helps you read, digest, and identify the key aspects of long legal documents.

The AI for Optimizing Travel group seeking advice from AI mentor Marlene

A talented artist was also brought in to visualize each workflow group’s problem statements and potential solutions—literally and figuratively illustrating their innovative ideas.

Graphic recorder Maria Foulquié putting the final touches on the illustration

Final illustration documenting the 2024 MongoDB.local London AI Build Together event

All in all, our second time hosting this event was deemed a success by everyone involved. “It was impressive to see how attendees, regardless of their technical background, found ways to contribute to complex AI solutions,” says Loghman Zadeh, AI Director at gravity9, who served as one of the event’s advisors. “Engaging with so many creative and forward-thinking individuals, all eager to push the boundaries of AI innovation was refreshing. The collaborative atmosphere fostered dynamic discussions and allowed participants to explore new ideas in a supportive environment.”

If you’re interested in taking part in events like these—which offer a range of networking opportunities—there are three more MongoDB.local events slated for 2024: São Paulo, Paris, and Stockholm. Additionally, you can join your local MongoDB user group to learn from and connect with other MongoDB developers in your area.

October 23, 2024
Artificial Intelligence

Gamuda Puts AI in Construction with MongoDB Atlas

Gamuda Berhad is a leading Malaysian engineering and construction company with operations across the world, including Australia, Taiwan, Singapore, Vietnam, and the United Kingdom. The company is known for its innovative approach to construction through the use of cutting-edge technology. Speaking at MongoDB.local Kuala Lumpur in August 2024, John Lim, Chief Digital Officer at Gamuda, said: “In the construction industry, AI is increasingly being used to analyze vast amounts of data, from sensor readings on construction equipment to environmental data that impacts project timelines.”

One of Gamuda’s priorities is determining how AI and other tools can impact the company’s methods for building large projects across the world. For that, the Gamuda team needed the right infrastructure, with a database equipped to handle the demands of modern AI-driven applications. MongoDB Atlas fulfilled all the requirements and enabled Gamuda to deliver on its AI-driven goals.

Why Gamuda chose MongoDB Atlas

“Before MongoDB, we were dealing with a lot of different databases and we were struggling to do even simple things such as full-text search,” said Lim. “How can we have a tool that's developer-friendly, helps us scale across the world, and at the same time helps us to build really cool AI use cases, where we're not thinking about the infrastructure or worrying too much about how things work but are able to just focus on the use case?”

After some initial conversations with MongoDB, Lim’s team saw that MongoDB Atlas could help it streamline its technology stack, which was becoming very complex and time-consuming to manage. MongoDB Atlas provided the optimal balance between ease of use and powerful functionality, enabling the company to focus on innovation rather than database administration. “I think the advantage that we see is really the speed to market. We are able to build something quickly. We are fast to meet the requirements to push something out,” said Lim.

Chi Keen Tan, Senior Software Engineer at Gamuda, added, “The team was able to use a lot of developer tools like MongoDB Compass, and we were quite amazed by what we can do. This [ability to search the items within the database easily] is just something that’s missing from other technologies.”

Being able to operate MongoDB on Google Cloud was also a key selling point for Gamuda: “We were able to start on MongoDB without any friction of having to deal with a lot of contractual problems and billing and setting all of that up,” said Lim.

How MongoDB is powering more AI use cases

Gamuda uses MongoDB Atlas and functionalities such as Atlas Search and Vector Search to bring a number of AI use cases to life. This includes work implemented on Gamuda’s Bot Unify platform, which Gamuda built in-house using MongoDB Atlas as the database. By using documents stored in SharePoint and other systems, this platform helps users write tenders quicker, find out about employee benefits more easily, or discover ways to improve design briefs. “It’s quite incredible. We have about 87 different bots now that people across the company have developed,” Lim said.

Additionally, the team has developed the Gamuda Digital Operating System (GDOS), which can optimize various aspects of construction, such as predictive maintenance, resource allocation, and quality control. MongoDB’s ability to handle large volumes of data in real time is crucial for these applications, enabling Gamuda to make data-driven decisions that improve efficiency and reduce costs.
How MongoDB is powering more AI use cases

Gamuda uses MongoDB Atlas and features such as Atlas Search and Atlas Vector Search to bring a number of AI use cases to life. One example is Bot Unify, a platform Gamuda built in-house with MongoDB Atlas as its database. Drawing on documents stored in SharePoint and other systems, Bot Unify helps users write tenders faster, find information about employee benefits more easily, and discover ways to improve design briefs. “It’s quite incredible. We have about 87 different bots now that people across the company have developed,” Lim said.

The team has also developed the Gamuda Digital Operating System (GDOS), which optimizes aspects of construction such as predictive maintenance, resource allocation, and quality control. MongoDB’s ability to handle large volumes of data in real time is crucial for these applications, enabling Gamuda to make data-driven decisions that improve efficiency and reduce costs.

MongoDB Atlas Vector Search, in particular, enables Gamuda’s AI models to retrieve relevant data quickly and accurately, improving both the speed and the quality of decision-making. It also helps the Gamuda team find patterns and correlations in the data that might otherwise go unnoticed.

Gamuda’s journey with MongoDB Atlas is just beginning as the company continues to explore new ways to integrate technology into its operations and expand to other markets. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.
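For a sense of what retrieval in a platform like Bot Unify can look like, here is a minimal $vectorSearch sketch. The collection, field names, index name, and embedding helper are assumptions made for this example, not details of Gamuda’s system.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
chunks = client["knowledge"]["chunks"]  # hypothetical collection of document chunks


def embed_text(text: str) -> list[float]:
    # Placeholder: call the same embedding model that produced the
    # vectors stored in the "embedding" field. Hypothetical helper.
    raise NotImplementedError


pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",  # assumes a vector search index with this name
            "path": "embedding",
            "queryVector": embed_text("What are the employee travel benefits?"),
            "numCandidates": 100,  # candidates considered before final ranking
            "limit": 5,            # top matches returned
        }
    },
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in chunks.aggregate(pipeline):
    print(doc["score"], doc["text"][:80])
```

In a retrieval-augmented generation (RAG) application, the returned chunks would then be passed to an LLM as grounding context for its answer.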

October 22, 2024
Artificial Intelligence

Built With MongoDB: Buzzy Makes AI Application Development More Accessible

AI adoption rates are sky-high and showing no signs of slowing down. One of the driving forces behind this explosive growth is the increasing popularity of low- and no-code development tools that make this transformative technology accessible to tech novices. Buzzy, an AI-powered no-code platform that aims to revolutionize how applications are created, is one such company. Buzzy enables anyone to transform an idea into a fully functional, scalable web or mobile application in minutes. Buzzy developers use the platform for a wide range of use cases, from a stock portfolio tracker to an AI t-shirt store. The only way the platform could support such diverse applications is by being built on a uniquely versatile data architecture. So it’s no surprise that the company chose MongoDB Atlas as its underlying database.

Creating the buzz

Buzzy’s mission is simple but powerful: to democratize the creation of applications by making the process accessible to everyone, regardless of technical expertise. Founder Adam Ginsburg—a self-described husband, father, surfer, geek, and serial entrepreneur—spent years building solutions for other businesses. After building and selling an application that eventually became the IBM Web Content Manager, he created a platform that allows anyone to build custom applications quickly and easily. Buzzy initially focused on white-label technology for B2B applications, which global vendors brought to market. Over time, the platform evolved into something much bigger.

The traditional method of developing software, as Ginsburg puts it, is dead. He observed two major trends behind this shift: the rise of artificial intelligence (AI) and the design-centric approach to product development exemplified by tools like Figma.

Buzzy set out to address two major problems. First, traditional software development is often slow and costly: small-to-medium-sized business (SMB) projects can cost anywhere from $50,000 to $250,000 and take nine months to complete. Because of these high costs and long timelines, many projects either fail to start or run out of resources before they’re finished. Second, while AI has revolutionized many aspects of development, it isn’t a cure-all for generating vast amounts of code. Generating tens of thousands of lines of code with AI is not only unreliable but also lacks the security and robustness that enterprise applications demand, and the resulting code often can’t be maintained or supported effectively by IT teams. This is where Buzzy found a way to harness AI effectively, using it in a co-pilot mode to create maintainable, scalable applications.

Buzzy’s original vision focused on improving communication and collaboration through custom applications. Over time, the platform’s mission shifted toward no-code development, recognizing that these custom apps were key drivers of collaboration and business effectiveness. The Buzzy user experience (UX) is highly streamlined, so even non-technical users can leverage the power of AI in their apps. Initially, Buzzy's offerings were somewhat rudimentary, producing functional but unpolished B2B apps. The platform soon evolved: instead of building its own UX and user interface (UI) capabilities, Buzzy integrated with Figma, giving users access to the design-centric workflow they were already familiar with. The advent of large language models (LLMs) provided another boost, enabling Buzzy to accelerate AI-powered development.

What sets Buzzy apart is its unique approach to building applications. Unlike traditional development, where code and application logic are often intertwined, Buzzy separates the "app definition" from the "core code." This distinction brings significant benefits in scalability, maintainability, and integration with AI. Instead of handing massive chunks of code to an AI system—which can result in errors and inefficiencies—Buzzy gives the AI a concise, consumable description of the application, making it easier to work with. Meanwhile, the core code, written and maintained by humans, remains robust, secure, and high-performing. This approach not only simplifies AI integration but also ensures that updates to Buzzy’s core code benefit all customers simultaneously, an efficiency few traditional development teams can achieve.
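To make that split concrete, here is a hypothetical sketch of the idea in Python. The structure and field names are invented for illustration and are not Buzzy’s actual app-definition format; the point is that the model edits a compact declarative description rather than the application’s code.

```python
import json

# Invented example of a declarative "app definition" (not Buzzy's real format).
app_definition = {
    "name": "Stock Portfolio Tracker",
    "models": [
        {"name": "Holding", "fields": ["ticker", "shares", "cost_basis"]},
    ],
    "views": [
        {"type": "list", "model": "Holding", "sort": "ticker"},
        {"type": "form", "model": "Holding"},
    ],
}

# The LLM sees and edits only this compact description (a few hundred
# tokens) instead of tens of thousands of lines of generated code.
prompt = (
    "Here is the current app definition:\n"
    + json.dumps(app_definition, indent=2)
    + "\nAdd a chart view showing total portfolio value over time."
)

# The human-written core code (rendering, persistence, security) then
# interprets whatever updated definition the model returns.
print(prompt)
```

Because the human-maintained core code interprets the definition at runtime, improvements to that shared engine reach every customer’s app at once, which is the efficiency described above.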
Flexible platform, fruitful partnership

The partnership between Buzzy and MongoDB has been crucial to Buzzy’s success. MongoDB’s Atlas developer data platform provides a scalable, cost-effective solution that supports Buzzy’s technical needs across various applications. One of the standout features of MongoDB Atlas is its flexibility and scalability, which allows Buzzy to customize schemas to suit the diverse range of applications the platform supports. Additionally, MongoDB’s support—particularly with new features like Atlas Vector Search—has allowed Buzzy to grow and adapt without complicating its architecture.

In terms of technology, Buzzy’s stack is built for flexibility and performance. The platform uses Kubernetes and Docker running Node.js, with MongoDB as the database. Native clients are powered by React Native, using SQLite and WebSockets for communication with the server. On the AI side, Buzzy leverages several models, with OpenAI as the primary engine for fine-tuning its AI capabilities.

Thanks to the MongoDB for Startups program, Buzzy has received critical support, including Atlas credits, consulting, and technical guidance, helping the startup continue to grow and scale. With the continued support of MongoDB and an innovative approach to no-code development, Buzzy is well positioned to remain at the forefront of the AI-driven application development revolution.

A Buzzy future

Buzzy embodies the spirit of innovation in its own software development lifecycle (SDLC). The company is about to release two game-changing features that will take AI-driven app development to the next level: Buzzy FlexiBuild, which will allow users to build more complex applications using just AI prompts, and Buzzy Automarkup, which will let Figma users mark up screens, views, lists, forms, and actions with AI in minutes.

Ready to start bringing your own app visions to life? Try Buzzy and start building your application in minutes for free. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start guide.

October 18, 2024
Artificial Intelligence
