
AI-Powered Call Centers: A New Era of Customer Service

Customer satisfaction is critical for insurance companies. Studies have shown that companies with superior customer experiences consistently outperform their peers. In fact, McKinsey found that life and property/casualty insurers with superior customer experiences saw a 20% and 65% increase in Total Shareholder Return, respectively, over five years. A satisfied customer is a loyal customer: they are 80% more likely to renew their policies, directly contributing to sustainable growth. However, one major challenge faced by many insurance companies is the inefficiency of their call centers. Agents often struggle to quickly locate and deliver accurate information to customers, leading to frustration and dissatisfaction.

This article explores how Dataworkz and MongoDB can transform call center operations. By converting call recordings into searchable vectors (numerical representations of data points in a multi-dimensional space), businesses can quickly access relevant information and improve customer service. We'll dig into how the integration of Amazon Transcribe, Cohere, and MongoDB Atlas Vector Search—as well as Dataworkz's RAG-as-a-service platform—is achieving this transformation.

From call recordings to vectors: A data-driven approach

Customer service interactions are goldmines of valuable insights. By analyzing call recordings, we can identify successful resolution strategies and uncover frequently asked questions. In turn, by making this information—which is often buried in audio files—accessible to agents, we can help them give customers faster and more accurate assistance. However, the vast volume and unstructured nature of these audio files make it challenging to extract actionable information efficiently.
To address this challenge, we propose a pipeline that leverages AI and analytics to transform raw audio recordings into vectors, as shown in Figure 1:

- Storage of raw audio files: Past call recordings are stored in their original audio format.
- Processing of the audio files with AI and analytics services (such as Amazon Transcribe Call Analytics): speech-to-text conversion, summarization of content, and vectorization.
- Storage of vectors and metadata: The generated vectors and associated metadata (e.g., call timestamps, agent information) are stored in an operational data store.

Figure 1: Customer service call insight extraction and vectorization flow

Once the data is stored in vector format within the operational data store, it becomes accessible for real-time applications. This data can be consumed directly through vector search or integrated into a retrieval-augmented generation (RAG) architecture, a technique that combines the capabilities of large language models (LLMs) with external knowledge sources to generate more accurate and informative outputs.

Introducing Dataworkz: Simplifying RAG implementation

Building RAG pipelines can be cumbersome and time-consuming for developers who must learn yet another stack of technologies. Especially in this initial phase, when companies want to experiment and move fast, it is essential to leverage tools that abstract complexity and don't require deep knowledge of each component, so that teams can experiment with and realize the benefits of RAG quickly. Dataworkz offers a powerful and composable RAG-as-a-service platform that streamlines the process of building RAG applications for enterprises. To operationalize RAG effectively, organizations need to master five key capabilities:

- ETL for LLMs: Dataworkz connects with diverse data sources and formats, transforming the data to make it ready for consumption by generative AI applications.
- Indexing: The platform breaks down data into smaller chunks and creates embeddings that capture semantics, storing them in a vector database.
- Retrieval: Dataworkz ensures the retrieval of accurate information in response to user queries, a critical part of the RAG process.
- Synthesis: The retrieved information is then used to build the context for a foundational model, generating responses grounded in reality.
- Monitoring: With many moving parts in the RAG system, Dataworkz provides robust monitoring capabilities essential for production use cases.

Dataworkz's intuitive point-and-click interface (as seen in Video 1) simplifies RAG implementation, allowing enterprises to quickly operationalize AI applications. The platform offers flexibility and choice in data connectors, embedding models, vector stores, and language models. Additionally, tools like A/B testing ensure the quality and reliability of generated responses. This combination of ease of use, optionality, and quality assurance is a key tenet of Dataworkz's "RAG as a Service" offering.

Diving deeper: System architecture and functionalities

Now that we've looked at the components of the pre-processing pipeline, let's explore the proposed real-time system architecture in detail. It comprises the following modules and functions (see Figure 2):

- Amazon Transcribe, which receives the audio coming from the customer's phone and converts it into text.
- Cohere's embedding model, served through Amazon Bedrock, which vectorizes the text coming from Transcribe.
- MongoDB Atlas Vector Search, which receives the query vector and returns the document containing the most semantically similar FAQ in the database.
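The FAQ lookup performed by the last module can be sketched as an Atlas Vector Search aggregation stage. This is a minimal illustration: the index name ("faq_index"), the vector field ("embedding"), and the candidate count are placeholder choices for this sketch, not values prescribed by the article.

```python
def faq_vector_search_stage(query_vector, limit=1):
    """Build a $vectorSearch aggregation stage that returns the FAQ
    document(s) most semantically similar to the caller's question.

    Index and field names are illustrative assumptions."""
    return {
        "$vectorSearch": {
            "index": "faq_index",        # hypothetical Atlas Vector Search index
            "path": "embedding",         # field holding the FAQ answer vectors
            "queryVector": query_vector, # vector produced from the caller's question
            "numCandidates": 100,        # candidates considered before ranking
            "limit": limit,              # how many FAQs to return
        }
    }
```

In the real-time flow, this stage would be the first entry of a `collection.aggregate([...])` call, with the query vector produced by the embedding model from the transcribed question.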
Figure 2: System architecture and modules

Here are a couple of FAQs we used for the demo:

Q: "Can you explain the different types of coverage available for my home insurance?"
A: "Home insurance typically includes coverage for the structure of your home, your personal belongings, liability protection, and additional living expenses in case you need to temporarily relocate. I can provide more detailed information on each type if you'd like."

Q: "What is the process for adding a new driver to my auto insurance policy?"
A: "To add a new driver to your auto insurance policy, I'll need some details about the driver, such as their name, date of birth, and driver's license number. We can add them to your policy over the phone, or you can do it through our online portal."

Note that the questions are shown just for reference; they are not used for retrieval. The actual question is provided by the user through the voice interface and then matched in real time with the answers in the database using Vector Search. This information is finally presented to the customer service operator in text form (see Figure 3).

The proposed architecture is simple but powerful: easy to implement and effective. Moreover, it can serve as a foundation for more advanced use cases that require complex interactions, such as agentic workflows, and iterative, multi-step processes that combine LLMs and hybrid search to complete sophisticated tasks.

Figure 3: App interface, displaying what has been asked by the customer (left) and how the information is presented to the customer service operator (right)

This solution not only impacts human operator workflows but can also underpin chatbots and voicebots, enabling them to provide more relevant and contextual customer responses.

Building a better future for customer service

By seamlessly integrating analytical and operational data streams, insurance companies can significantly enhance both operational efficiency and customer satisfaction.
Our system empowers businesses to optimize staffing, accelerate inquiry resolution, and deliver superior customer service through data-driven, real-time insights. To embark on your own customer service transformation, explore our GitHub repository and take advantage of the Dataworkz free tier.

November 27, 2024

Better Digital Banking Experiences with AI and MongoDB

Interactive banking represents a new era in financial services where customers engage with digital platforms that anticipate, understand, and meet their needs in real time. This approach encompasses AI-driven technologies such as chatbots, virtual assistants, and predictive analytics that allow banks to enhance digital self-service while delivering personalized, context-aware interactions.

According to Accenture's 2023 consumer banking study, 44% of consumers aged 18-44 reported difficulty accessing human support when needed, underscoring the demand for more responsive digital solutions that bridge the gap between customers and financial services. Generative AI technologies like chatbots and virtual assistants can fill this need by instantly addressing inquiries, providing tailored financial advice, and anticipating future needs. This shift has tremendous growth potential; the global chatbot market is expected to grow at a CAGR of 23.3% from 2023 to 2030, with the financial sector experiencing the fastest growth rate of 24.0%. This shift is more than just a convenience; it aims to create a smarter, more engaging, and more intuitive banking journey for every user.

Simplifying self-service banking with AI

Navigating daily banking activities like transfers, payments, and withdrawals can often raise immediate questions for customers: "Can I overdraft my account?" "What will the penalties be?" or "How can I avoid these fees?" While the answers usually lie within the bank's terms and conditions, these documents are often dense, complex, and overwhelming for the average user. At the same time, customers value their independence and want to handle their banking needs through self-service channels, but wading through extensive fine print isn't what they signed up for. By integrating AI-driven advisors into the digital banking experience, banks can provide a seamless, in-app solution that delivers instant, relevant answers.
This removes the need for customers to leave the app to sift through pages of bank documentation in search of answers, or worse, endure the inconvenience of calling customer service. The result is a smoother, more user-friendly interaction in which customers feel supported in their self-service journey, free from the frustration of navigating traditional, cumbersome information sources. The entire experience remains within the application, enhancing convenience and efficiency.

Solution overview

This AI-driven solution enhances the self-service experience in digital banking by applying retrieval-augmented generation (RAG) principles, which combine the power of generative AI with reliable information retrieval, ensuring that the chatbot provides accurate, contextually relevant responses. The approach begins by processing dense, text-heavy documents, like terms and conditions, that are often the source of customer inquiries. These documents are divided into smaller, manageable chunks that are vectorized to create searchable data representations. Storing these vectorized chunks in MongoDB Atlas allows for efficient querying using MongoDB Atlas Vector Search, making it possible to instantly retrieve relevant information based on the customer's question.

Figure 1: Detailed solution architecture

When a customer inputs a question in the banking app, the system quickly identifies and retrieves the most relevant chunks using semantic search. The AI then uses this information to generate clear, contextually relevant answers within the app, enabling a smooth, frustration-free experience without requiring customers to sift through dense documents or contact support.

Figure 2: Leafy Bank mock-up chatbot in action

How MongoDB supports AI-driven banking solutions

MongoDB offers unique capabilities that empower financial institutions to build and scale AI-driven applications.
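The chunking step described above can be sketched in a few lines. This is a simplified character-based splitter with overlap, for illustration only; the chunk size and overlap are arbitrary assumptions, and production pipelines typically split on sentences or tokens instead.

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split a long document (e.g. terms and conditions) into overlapping
    character chunks, ready to be embedded and stored as vectors.

    Overlap keeps context that straddles a chunk boundary retrievable
    from both neighboring chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each chunk would then be passed to an embedding model and stored alongside its vector in an Atlas collection for later semantic search.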
- Unified data model for flexibility: MongoDB's flexible document model unifies structured and unstructured data, creating a consistent dataset that enhances the AI's ability to understand and respond to complex queries. This model enables financial institutions to store and manage customer data, transaction history, and document content within a single system, streamlining interactions and making AI responses more contextually relevant.
- Vector search for enhanced querying: MongoDB Atlas Vector Search makes it easy to perform semantic searches on vectorized document chunks, quickly retrieving the most relevant information to answer user questions. This capability allows the AI to find precise answers within dense documents, enhancing the self-service experience for customers.
- Scalable integration with AI models: MongoDB is designed to work seamlessly with leading AI frameworks, allowing banks to integrate and scale AI applications quickly and efficiently. By aligning MongoDB Atlas with cloud-based LLM providers, banks can use the best tools available to interpret and respond to customer queries accurately, meeting demand with responsive, real-time answers.
- High performance and cost efficiency: MongoDB's multi-cloud, developer-friendly platform allows financial institutions to innovate without costly infrastructure changes. It's built to scale as data and AI needs grow, ensuring banks can continually improve the customer experience with minimal disruption. MongoDB's built-in scalability allows banks to expand their AI capabilities effortlessly, offering a future-proof foundation for digital banking.
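The semantic-search step at the heart of this list ranks stored chunks by vector similarity to the question. The following is a toy, in-memory sketch of that ranking using cosine similarity and hand-written two-dimensional vectors; in the actual solution, embeddings come from a model and the ranking is done server-side by Atlas Vector Search.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_chunk(query_vec, chunk_docs):
    """Return the stored chunk document most similar to the query vector.

    chunk_docs mimics documents from the vector store:
    {"text": ..., "embedding": [...]}."""
    return max(chunk_docs, key=lambda d: cosine(query_vec, d["embedding"]))
```

With real embeddings, the chunk returned here is what gets passed to the LLM as grounding context for the generated answer.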
Building future-proof applications

Implementing generative AI presents several advantages, not only for end users of interactive banking applications but also for financial institutions. An enhanced user experience encourages customer satisfaction, ensures retention, boosts reputation, and reduces customer turnover, while unlocking new opportunities for cross-selling and up-selling to increase revenue, drive growth, and elevate customer value.

Moreover, adopting AI-driven initiatives lays the groundwork for businesses to develop innovative, creative, and future-proof applications that address customer needs and upgrade business applications with features that are shaping the industry and will continue to do so. Here are some examples:

- Summarize and categorize transactional information by powering applications with MongoDB's Real-Time Analytics.
- Understand and identify trends based on customer behavior that can improve fraud prevention, anti-money laundering (AML), and credit card application processing (just to mention a few).
- Offer investing, budgeting, and loan assessments through AI-powered conversational banking experiences.

In today's data-driven world, companies face increasing pressure to stay ahead of rapid technological advancements and ever-evolving customer demands. Now more than ever, businesses must deliver intuitive, robust, and high-performing services through their applications to remain competitive and meet user expectations. Luckily, MongoDB provides businesses with comprehensive reference architectures for building generative AI applications: an end-to-end technology stack that includes integrations with leading technology providers, professional services, and a coordinated support system through the MongoDB AI Applications Program (MAAP).
By building AI-enriched applications with the leading multi-cloud developer data platform, companies can leverage low-cost, efficient solutions through MongoDB's flexible and scalable document model, which empowers businesses to unify real-time, operational, unstructured, and AI-related data, extending and customizing their applications to seize upcoming technological opportunities.

Check out these additional resources to get started on your AI journey with MongoDB:

- How Leading Industries are Transforming with AI and MongoDB Atlas - E-book
- Our Solutions Library, where you can learn about different use cases for gen AI and other interesting topics applied to financial services and many other industries.

November 26, 2024

NetEase Games Shares MongoDB Operations and Analytics Practices for Gaming Scenarios

In the gaming industry, database stability and performance directly affect game quality and player satisfaction. In a fiercely competitive market, an excellent database product lays a solid foundation for game development and long-term operations. As MongoDB has become more widely adopted across different types of gaming scenarios, many well-known game companies now use it to handle their game data.

NetEase Games has grown alongside game enthusiasts since its founding. After 20 years of rapid development, NetEase ranks among the world's top seven game companies. As a leading Chinese game developer, NetEase has long been at the forefront of independently developed online games. Today, many NetEase products, including mobile games, the business data center, and other internal products, make extensive use of MongoDB.

Taking the gaming ecosystem, architecture, stability, and cost into account, NetEase Games' database architecture adopts a multi-type service architecture, matching the database to each stage of a game's life cycle. MongoDB supports replica sets and sharded clusters, scales horizontally on demand, and lets teams choose appropriate sharding strategies and replica set configurations, giving NetEase Games both flexibility and high availability.

How can MongoDB's behavioral data be analyzed systematically to support product decisions?

As the gaming industry evolves, the scale and complexity of game data keep increasing. Game development also requires extensive data analysis to understand player behavior, optimize the game experience, and shape marketing strategies. Zheng Liangju, head of the database business at NetEase Interactive Entertainment, explained: "We want to analyze MongoDB's behavioral data systematically and in a planned way, and apply it to routine management, so that while interacting with the MongoDB database we can respond in real time to business events triggered on demand."

A flexible document model makes data analysis easier

MongoDB's flexible document model excels at handling the complex data structures found in games. Games perform large numbers of full and incremental updates across different scenarios, and data structures such as player profiles, game state, and item information change frequently; at the same time, optimizing fields that account for a large share of a document can also change the workload. MongoDB stores data as documents in JSON format, and each document can have different fields and structures, making it well suited to complex game data. With MongoDB, game developers can add or modify fields at any time without complex database migrations.

When operations teams need to investigate player reports, such as lost equipment, incorrect attributes, or questions about reward distribution, NetEase Games' internal requirements for document history analysis demand timeliness (periodicity), queryability (availability), comparability (change diffs), and immediacy (the oplog carries no indexes), all at once. MongoDB uses replica sets for data redundancy and high availability, and the operation log (oplog) is one of the core mechanisms of a replica set. By linking the replica set's oplog with the original document data, NetEase Games can quickly recover data from the oplog when the primary node fails, and can audit and troubleshoot issues.

Handling many scenarios and applications with ease

In certain game scenarios, historical leaderboards keep data for every match, periodic event instances must be settled, a single player's gameplay data keeps accumulating, and player equipment attributes grow ever more complex. Large documents of this kind keep growing, leading to slow batch updates, abnormal cache usage, and long queue waits. Through regular, automated inspections of MongoDB, NetEase Games helps product teams detect large-document risks early, such as excessive database load, request timeouts, and unexpected errors, so they can take corrective action and avoid risks in live operations.

During live operations, indexes are crucial to access efficiency; unindexed queries drive up instance load and, in severe cases, block all requests, degrading the game experience and service stability. Building on MongoDB, NetEase Games implemented real-time index detection and alerting at the access-entry code level for routine manual queries, so users can decide whether to proceed based on the warning. At the system level, each instance can be configured to allow or reject unindexed queries, further reducing unintended mistakes. Beyond indexing, MongoDB's query optimization can also accelerate queries on hot data, further improving QPS (queries per second).

Live games also face sudden incidents, such as a request surge caused by a malfunctioning game mode at peak hours, or a stuck service blocking an entire instance's requests. In these cases, analyzing per-database and per-collection QPS (beyond what mongostat and mongotop report) is critical to finding the root cause. Using MongoDB's built-in capabilities, NetEase Games collects database- and collection-level QPS and access latency in real time to help product teams spot bottlenecks quickly. Tracking and analyzing client behavior also helps locate the source of problems: with MongoDB's currentOp capability, NetEase Games can gather real-time session statistics for business clients, search the operations running on an instance's nodes, and trace the operations of a specific connection.

At the same time, in cases of data migration or abnormal data distribution, MongoDB helps remove the biggest obstacle facing technical leads: synchronizing large volumes of data and managing it once landed, improving operational efficiency.
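The unindexed-query detection described above boils down to inspecting a query's explain plan before letting it run. A minimal sketch, assuming the caller already has the explain output as a dictionary: it walks the winning plan and flags a COLLSCAN (full collection scan), the signature of a query that found no usable index.

```python
def uses_collection_scan(explain_output):
    """Return True if the explain() winning plan contains a COLLSCAN stage,
    i.e. the query would run without an index.

    A gate like this at the access-entry layer can warn the user or reject
    the query outright, as described in the article."""
    stage = explain_output.get("queryPlanner", {}).get("winningPlan", {})
    while stage:
        if stage.get("stage") == "COLLSCAN":
            return True
        stage = stage.get("inputStage")  # descend into the plan tree
    return False
```

With pymongo, the input would come from `collection.find(query).explain()`; the instance-level allow/deny policy would then decide whether a flagged query is rejected or merely logged.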
Zheng Liangju noted that adopting MongoDB is only the beginning: learning how to better serve the business takes long-term practice, exploration, and consolidation across many scenarios, and that remains the direction of ongoing work. Analyzing MongoDB data behavior gives NetEase Games an effective reference for data-driven decisions, generating greater returns and value.

Click to register and start using MongoDB Atlas for free.

November 26, 2024

Influencing Product Strategy at MongoDB with Garaudy Etienne

Garaudy Etienne joined MongoDB as a Product Manager in October of 2019. Since then, he's experienced tremendous growth. Successful deliveries of MongoDB 4.4 features and MongoDB 5.0 sharding features helped fuel Garaudy's career development, as did his work establishing a long-term sharding vision, mentoring others, and successfully managing interns. Now, as a Director of Product, he's defining the strategic direction across multiple products and helping grow our product management organization and culture. Read on to learn more about Garaudy's experience at MongoDB and his expanding team.

A team with impact

My team focuses on distributed systems within MongoDB's core database functions, also known as the database engine. Our team ensures the database is reliable and scalable for our most demanding customers. We ensure the product consistently performs as promised, especially at scale. MongoDB's dependability drives greater usage, which enhances our revenue and brand perception. The problems my team works on are vast and relatively undefined. These include revamping our go-to-market strategy for new and existing features, guiding the engineering team on architectural decisions driven by customer demands, identifying target markets, and assisting customers in challenging situations.

MongoDB and AI

We're in the early stages of the AI boom. MongoDB's document model is particularly well-suited for this era, as it excels at handling unstructured data, which makes up the majority of today's information. As AI increasingly relies on diverse formats like text, images, and videos, our flexible schema enables efficient storage and retrieval of unstructured data, enabling applications to extract valuable insights. Our vector search capability enables fast, complex data matching and retrieval, making it ideal for AI-powered applications.
This synergy between MongoDB's document model plus Vector Search and the needs of AI-driven applications positions us as a powerful foundation for companies looking to bring AI into their workflows. The beauty of working on the core database is that it has to support every workload, including new and expanding Vector Search applications. This means we need to ensure the database remains robust and scalable as AI demands evolve. Some examples are helping develop a more scalable architecture for Search, or a new networking stack for Search. No matter what new capabilities MongoDB decides to deliver or which new markets we enter, everything must pass through the core database. This also allows you to meet lots of people and understand everything the company is doing instead of working in a silo.

A rewarding career in product

MongoDB is committed to career development, something I've experienced first-hand. The company has provided me with development opportunities through product management-specific training with Reforge, conferences, direct engagement with critical customers, and leadership training. As a product manager, I was offered mentorship and coaching with multiple experienced product leaders who provided guidance and support as I worked toward promotions. The company clearly communicates the expectations and requirements for advancement within the product management organization.

Reflecting on my journey at MongoDB, I still remember the first two features I PM'd: Hedged Reads and Mirrored Reads. One of my first major highlights was presenting at the MongoDB 5.0 keynote to showcase resharding. Seeing genuine excitement from customers and internal teams about this new feature was incredibly fulfilling and reinforced its value. While the keynote was a public milestone, another personal highlight came when I finally visited one of my engineering teams in Barcelona after nearly two years of remote collaboration.
This in-person time was invaluable and helped us bring the groundbreaking sharding changes for MongoDB 6.0 to the finish line. Most recently, defining the key strategic pillars for MongoDB 8.0 and letting other product managers take ownership of key initiatives has been more rewarding than I imagined. MongoDB's engineering team is extremely talented, and collaborating with them always brings me tremendous joy. The most recent highlight of my career has been building a diverse product team and helping other product managers make a larger impact than they previously envisioned.

Why MongoDB

What keeps me at MongoDB is the opportunity to tackle significant challenges, make autonomous decisions, own multiple products, and take on greater leadership responsibilities. MongoDB also rewards and recognizes product managers who drive meaningful impact across the organization and its products. If these opportunities excite you, you'll thrive as part of MongoDB's product management team! For my team, I'm committed to providing the right balance of guidance and autonomy. Your decisions will have a lasting impact at the executive and organizational levels, creating continuous opportunities to excel and deliver meaningful results. Plus, I always try to make the job fun.

Head to our careers site to apply for a role on Garaudy's team and join our talent community to stay in the loop on all things #LifeAtMongoDB!

November 25, 2024

Innovation and Health: MongoDB Supports the Piattaforma Nazionale di Telemedicina, Italy's National Telemedicine Platform

Will the medicine of the future be far more "tele" and less physical? Will we be able to monitor our health without travel and without waiting rooms? Let's find out together with the Italian protagonists of this revolution, built on major investments and innovative technologies.

PNT Italia is the project company formed by Engineering and Almaviva, to which AGENAS (the Italian National Agency for Regional Healthcare Services) awarded the concession to design, build, and operate the Piattaforma Nazionale di Telemedicina (National Telemedicine Platform), which integrates regional services to improve the quality of, and access to, care across the country, in line with the digital-health objectives of the National Recovery and Resilience Plan. Formally, PNT Italia is 60% owned by Engineering and 40% by Almaviva, two of Italy's largest technology groups. Under the AGENAS concession, PNT Italia built the National Telemedicine Platform, which it will operate for 10 years, with the task of providing centralized governance and monitoring of the telemedicine processes carried out at the regional level and connecting the central administration with local administrations.

To better understand the road that will take us to telemedicine, we interviewed three protagonists of this initiative: Stefano Zema, head of PNT development and delivery at the Engineering Group; Daniele Fortuna, PNT Infrastructure Architect at Almaviva; and Angelo Immediata, PNT Solution Architect at Engineering.

"The project, which spans a decade, is divided into three macro phases," says Stefano Zema. "The first, which ran from May to December 2023, covered design and implementation, concluding with the platform's acceptance testing. Now we face two years of launch and consolidation (2024-25), during which the platform will have to adapt to and integrate the regional telemedicine solutions, which are themselves still under development.
The third phase covers the actual operation of the platform."

"Between the Engineering and Almaviva teams," Zema continues, "a cohesive atmosphere formed immediately, helped by the presence of MongoDB's Italian engineers and by international support. The project's goals, supporting the country's digital transition and allowing chronically ill patients to stay close to their families, were so lofty and challenging that they created a climate of enthusiasm and a desire to do well throughout the organization. The people, the method, the cohesion, and the enthusiasm were essential ingredients in the success of this first phase."

"The goal," adds Angelo Immediata, "is also to help the government make operators and patients understand the importance of telemedicine. There are many roads to get there. The most effective are defining common standards and implementing effective telemedicine solutions, so that operators adopt them naturally and follow the prescribed guidelines. To achieve this, PNT employs between 100 and 150 people based in 14 different Italian regions, with Engineering and Almaviva contributing resources to the project dynamically."

A complex project for a lofty goal

The challenge facing the group is multi-layered. In the short term, it must build the core of a flexible, secure, and interoperable platform (to connect the regional systems with the central one). In the medium term, it must guarantee stability, scalability, and resilience. In the long term, it will be essential to help operators spread a culture of healthcare centered on the needs of patients and their families.
"We chose," explains Fortuna, "to favor a cloud infrastructure and an event-driven architecture, and to design an open platform that avoids lock-in and lets PNT get the greatest benefit from the public cloud."

"We chose MongoDB Atlas as our database for several reasons," says Angelo Immediata. "First of all, the type of data we handle, in the HL7 FHIR (Fast Healthcare Interoperability Resources) standard, is a perfect fit for MongoDB; then there is the technology's multi-cloud nature, its security management, and its horizontal scalability for handling large volumes of data."

The advantages of an open, scalable environment

PNT chose to build an event-driven platform that is simple to extend and adapt, scalable, and flexible. "The entire technology stack," Immediata explains, "is based on the Java Spring framework for the back end and Angular for the front end. In the data flows from the regional infrastructures to the central system, the information is anonymized without losing its value, ensuring security and privacy, with the goal of zero vulnerabilities."

"The platform runs entirely in the cloud on AWS infrastructure," says Fortuna, "but it was designed to have no lock-in. We are firmly cloud native, but we chose not to use the provider's native services; that was one of the reasons we picked MongoDB Atlas, which lives as SaaS and is multi-environment by definition."

Among the MongoDB services PNT already uses are Online Archive, which intelligently manages the storage of the terabytes of data coming from healthcare facilities, supporting the project's economic and environmental sustainability, and Encryption at Rest, which protects data at the database level, reducing complexity and aiding compliance.
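The anonymization step Immediata mentions can be illustrated with a toy sketch: stripping direct identifiers from a simplified HL7 FHIR Patient resource before it flows from a regional system to the central platform. The field list here is illustrative only, not PNT's actual rule set or a compliance recipe.

```python
def anonymize_fhir_patient(resource):
    """Return a copy of a (simplified) FHIR Patient resource with direct
    identifiers removed, so the data keeps its analytical value while
    protecting the patient's identity.

    The set of fields to drop is a hypothetical example."""
    identifying_fields = {"identifier", "name", "telecom", "address", "birthDate"}
    return {k: v for k, v in resource.items() if k not in identifying_fields}
```

The anonymized document retains clinically useful fields (e.g. `resourceType`, `gender`) and can be stored in MongoDB as-is, since FHIR resources are JSON documents.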
Next on the horizon is Vector Search technology, paired with the adoption of generative AI. "With the first phase complete," says Fortuna, "we will now devote our energy to verifying and validating the regional platforms." Immediata adds: "We start from the encouraging fact that with MongoDB the effort to create instances is roughly half of what other technologies require, and the time is measured in hours instead of days."

The team is ready for the challenges ahead, and ready to exploit the database's flexibility, given that by the end of 2025 more than 300,000 patients will be receiving telemedicine care simultaneously.

"In a 10-year project, no one can know how the technology will evolve; that's why it was important to choose a partner that always lets us pick the best solutions. MongoDB lets us sleep soundly. It's a young, dynamic company that is at the same time a market leader." - Stefano Zema, head of PNT development and delivery, Engineering Group

See how to securely manage healthcare data with MongoDB Atlas.

November 25, 2024

Customer Service Expert Wati.io Scales Up on MongoDB

Wati.io is a software-as-a-service (SaaS) platform that empowers businesses to develop conversation-driven strategies to boost growth. Founded by CEO Ken Yeung in 2019, Wati started as a chatbot solution for large enterprises, such as banks and insurance companies. However, over time, Yeung and his team noticed a growing need among small and medium-sized businesses (SMBs) to manage customer conversations more effectively. To address this need, Wati used MongoDB Atlas and built a solution based on the WhatsApp Business API. It enables businesses to manage and personalize conversations with customers, automate responses, improve commerce functions, and enhance customer engagement.

Speaking at MongoDB.local Hong Kong in September 2024, Yeung said, "The current solutions on the market today are not good enough. Especially for SMBs [that] don't have the same level of resources as enterprises to deal with the number of conversations and messages that need to be handled every day."

Supporting scale: From MongoDB Community Edition to MongoDB Atlas

"From the beginning, we relied on MongoDB to handle high volumes of messaging data and enable businesses to manage and scale their customer interactions efficiently," said Yeung. Wati originally used MongoDB Community Edition, as the company saw the benefits of a NoSQL model from the beginning. As the company grew, it realized it needed a scalable infrastructure, so Wati transitioned to MongoDB Atlas. "When we started reaching the 2 billion record threshold, we started having some issues. Our system slowed down, and we were not able to scale it," said Yeung. Atlas has now become an essential part of Wati's infrastructure, helping the company store and process millions of messages each month for over 10,000 customers in 165 countries. "Transitioning to a new platform—MongoDB Atlas—seamlessly was critical because our messaging system needs to be on 24/7," said Yeung.
Wati collaborated closely with the MongoDB Professional Services and MongoDB Support teams, and in a few months it was able to rearchitect the deployment and data model for future growth and demand. The work included optimizing Wati’s database by breaking it down into clusters. Wati then focused on extracting connections, such as conversations, and dividing and categorizing data within the clusters—for example, qualifying data as cold or hot based on read and write frequencies. This architecture underpins the platform’s core features, including automated customer engagement, lead qualification, and sales management.

Deepening search capabilities with MongoDB Atlas Search

For Wati’s customers, the ability to search through conversation histories and company documents to retrieve valuable information is a key function. This often requires searching through millions of records to rapidly find answers so that they can respond to customers in real time. By using MongoDB Atlas Search, Wati improved its search capabilities, ultimately helping its business customers perform more advanced analytics and improve their customer service agents’ efficiency and customer reporting. “[MongoDB] Atlas Search is really helpful because we don’t have to do a lot of technical integration, and minimal programming is required,” said Yeung.

Looking ahead: Using AI and integrating more channels

Wati expects to continue collaborating with MongoDB to add more features to its platform and keep innovating at speed. The company is currently exploring how to build more AI capabilities into Wati KnowBot, as well as how it can expand its integration with other conversation platforms and channels such as Instagram and Facebook. To learn more about MongoDB Atlas, visit our product page. To get started with MongoDB Atlas Search, visit the Atlas Search product page.
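As a rough illustration of the kind of conversation search that Atlas Search enables, the sketch below assembles a `$search` aggregation pipeline as plain Python dictionaries, the way a PyMongo client would submit it. The index name, collection, and field names here are hypothetical, not Wati’s actual schema.

```python
# Hypothetical sketch: an Atlas Search pipeline for finding past
# conversations. Index and field names are illustrative only.

def build_conversation_search(term, limit=5):
    """Build an aggregation pipeline using the Atlas Search $search stage."""
    return [
        {
            "$search": {
                "index": "default",               # assumed Atlas Search index name
                "text": {
                    "query": term,
                    "path": ["subject", "body"],  # illustrative field names
                },
            }
        },
        {"$limit": limit},
        {"$project": {"subject": 1, "body": 1,
                      "score": {"$meta": "searchScore"}}},
    ]

pipeline = build_conversation_search("refund status")
# Against a live Atlas cluster this would run as:
#   db.conversations.aggregate(pipeline)
```

Because the pipeline is ordinary data, it can be built, inspected, and unit-tested without a live cluster.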

November 25, 2024

Hanabi Technologies Uses MongoDB to Power AI Assistant, Hana

For all the hype surrounding generative AI, cynics tend to view the few real-world implementations as little more than “fancy chatbots.” But for Abhinav Aggarwal, CEO of Hanabi Technologies, the idea of a generative AI-powered bot that is more than just an assistant was intriguing. “I’d been using ChatGPT since it launched,” said Aggarwal. “That got me thinking: How could we make a chatbot that was like a team member?” And with that concept, Hana was born.

The problem with bots

“Most generative AI chatbots do not act like people; they wait for a command and give a response,” said Aggarwal. “We wanted to create a human-like chatbot that would proactively help people based on what they wanted—automating reminders, for example, or fetching time zones from your calendar to correctly schedule meetings.” Hanabi’s flagship product, Hana, is an AI assistant designed to enhance team collaboration within Google Chat, working in concert with Google Workspace and its suite of products. “Our target customers are smaller companies of between 10 and 50 people. At this size you’re not going to build your own agent from scratch,” he said. Hana integrates with Google APIs to deliver a human-like assistant that chimes in with helpful interventions, such as automatically setting reminders and making sure meetings are booked in the right time zone for each participant. “Hana is designed to bring AI to smaller companies and help them collaborate in a space where they are already working—Google Workspace,” Aggarwal explained.

The MongoDB Atlas solution

For Hana to act like a member of the team, Hanabi needed to process massive amounts of data to support advanced features like retrieval-augmented generation (RAG) for better information retrieval across Google Docs and many other sources. And with a rapidly growing user base of over 600 organizations and 17,000+ installs, Hanabi also required a secure, scalable, and high-performing data storage solution.
MongoDB Atlas provided a flexible document model, built-in vector database, and scalable cloud-based infrastructure, freeing Hanabi engineers to build new features for Hana rather than focusing on rote tasks like data extract, transform, and load processes or manual scaling and provisioning. Now, MongoDB Atlas handles a variety of responsibilities:

Scalability and security: MongoDB Atlas’s auto-scaling and automatic backup features have enabled Hanabi to seamlessly grow its user base without the need for manual database management.

RAG: MongoDB Atlas plays a critical role in Hana’s RAG functionality. The platform enables Hanabi to split Google Docs into small sections, create embeddings, and store these sections in Atlas’s vector database.

Development processes: According to Aggarwal, MongoDB’s flexibility in managing changing schemas has been essential to the company’s fast-paced development cycle.

Data visualization: Using MongoDB Atlas Charts has enabled Hanabi to create comprehensive dashboards for real-time data visualization. This has helped the team track usage, set reminders, and optimize performance without needing to build a manual dashboard.

Impact and results

With MongoDB Atlas, Hanabi can successfully scale Hana to meet the demands of its rapidly expanding user base. The integration is also enabling Hana to offer powerful features like automatic interactions with customers, advanced information retrieval from Google Docs, and manually added memory snippets, making it an essential tool for teams around the world.

Next steps

Hanabi plans to continue integrating more tools into Hana while expanding its reach to personal Gmail users. The company is also rolling out a new automatic-interaction feature, further enhancing Hana’s ability to proactively assist users without direct commands.
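The chunk-and-embed flow described above (split a document into small sections, embed each, store the pieces) can be sketched in a few lines. This is an illustrative outline under assumed names, not Hanabi’s actual code, and the embedding function is a stand-in for a real model call.

```python
# Illustrative RAG document preparation: chunk text, attach embeddings,
# and shape documents for insertion into an Atlas collection that is
# backed by a vector search index. All names are invented.

def chunk_text(text, max_words=50):
    """Split text into word-bounded chunks of at most max_words words."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

def to_vector_docs(doc_id, text, embed):
    """Pair each chunk with its embedding, ready for insert_many()."""
    return [
        {"source_id": doc_id, "chunk_index": i, "text": chunk,
         "embedding": embed(chunk)}
        for i, chunk in enumerate(chunk_text(text))
    ]

# A real pipeline would call an embedding model here; this stand-in
# returns a fixed-size dummy vector so the flow can be exercised.
docs = to_vector_docs("gdoc-123", "word " * 120, lambda chunk: [0.0] * 4)
```

At query time, the same embedding model would encode the user’s question, and an Atlas `$vectorSearch` stage would retrieve the nearest chunks to feed the LLM.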
MongoDB Atlas remains a key component of Hanabi’s stack, alongside Google Kubernetes Engine, NestJS, and LangChain, enabling Hanabi to focus on innovating to improve the customer experience.

Tech stack:
MongoDB Atlas
Google Kubernetes Engine
NestJS
LangChain

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free MongoDB Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com.

November 21, 2024

Staff Engineering at MongoDB: Your Path to Making Broad Impact

Andrew Whitaker is a Senior Staff Engineer at MongoDB. His previous experience spans tiny startups to enormous organizations like AWS, where he held several different roles focusing on databases. Before joining MongoDB, he worked at a startup building optimized machine learning models in the cloud. Read on to learn more about why Andrew decided to join MongoDB in a senior-level engineering role and how his work is driving improvement within our engineering organization.

Why MongoDB

I have long been a fan of MongoDB’s products and services. MongoDB the database has always been a pleasure to work with – the system “brings joy,” to quote a phrase. As a Python developer, I appreciate how the Python driver feels “Pythonic” in a completely natural way. The programmer interacts with the database using Python constructs: dictionaries, lists, and primitive types. By contrast, SQL databases force me to change my mental model, and the query language feels like an add-on that does not blend with the core language. As an engineer, I am always looking to expand my knowledge and grow my skills. The scope of challenges engineers face at MongoDB is what triggered my interest in the company. We obviously have people working on core databases and distributed systems. But we also have teams dedicated to machine learning, streaming data, analytics, networking, developer tooling, drivers, and many more areas. It is very hard to get bored working at MongoDB. Finally, I would be remiss if I did not mention the people. Overall, MongoDB’s engineering culture prioritizes intelligence, low ego, and an ability to get stuff done.

CL/CI (Continuous Learning, Continuous Improvement)

Working at MongoDB has provided me with opportunities for continued learning and growth. Though I do not program as much as I did earlier in my career, I have recently been exploring the Rust language. I’m excited by Rust because it avoids the tradeoffs between predictable performance and safety.
My work in the search space has given me exposure to the fast-moving world of AI: vector embeddings, RAG, etc. For various reasons, I think MongoDB is uniquely positioned to do well in this area. On top of this, I’m working on some initiatives that are not fully public. I can say that one focus area is improving the sharding experience for our customers. We believe MongoDB sharding is best-in-breed. Still, the process requires more manual configuration than we think is ideal: customers select the shard key, cluster type, shard count, etc. We give guidance here, but I think we can raise the bar by offering a seamless experience with less “futz”. I’m also working with the search team. We believe there is a natural affinity between MongoDB’s document model and AI/ML workloads. We have some features in the works that extend this integration in new and interesting ways. I also spend a fair bit of time driving quality improvements across our suite of products. Our CTO Jim Scharf frequently refers to our “big 4” goals: security, durability, availability, and performance. These goals are more important than any feature we build. I’ve been working across the company to help teams define their availability SLOs/SLAs. It turns out that measuring availability is a subtle topic. For example, a naive approach of counting the percentage of failed requests can underestimate downtime because customers make fewer requests when a service is unavailable. So, the first step is to clarify the definition of availability. Finally, as a lapsed academic (in a distant life, I was a graduate student at the University of Washington Department of Computer Science and Engineering), I’m always interested in finding ways to bridge theory and practice. I’ve been collaborating with some folks in our research team to drive improvements to our replication protocols.
There are theoretical results that suggest it is impossible to simultaneously achieve low latency and strong consistency (“linearizability” in the technical jargon). However, we believe there are intermediate points in the consistency/latency spectrum that have not been fully explored. This work hasn’t been made into a product yet, but stay tuned.

Flexible working

MongoDB is a hybrid company. Like many of our engineers, I work outside the company headquarters in New York City (I live in Seattle). I appreciate MongoDB’s approach to hybrid working and that company leadership, starting with Dev, cares about the well-being of their employees. Some companies don’t seem to trust their employees to make decisions, such as which days to come into the office, so I’m thankful for the autonomy I receive at MongoDB to work in a way that’s best for me. Remote work has its challenges, but I would say that the benefit for my work/life balance has been transformative.

Final thoughts

I have found MongoDB engineers demonstrate a strong mix of technical depth, pragmatism, and empathy. I have yet to find the “smart jerk” prototype that seems to exist throughout the tech industry. Overall, I have found MongoDB is open to change and growth at both the team level and the individual level. There is a willingness to evolve and improve that aligns with the company’s values and leadership principles and enables the success of our technology and people. Find out more about MongoDB culture and career opportunities by joining our talent community.
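The availability-measurement subtlety Andrew describes earlier (failed-request percentages understating downtime because traffic drops during an outage) is easy to see with a toy calculation; the numbers below are invented purely for illustration.

```python
# Toy illustration (invented numbers): request-based availability can
# understate downtime, because clients back off while a service is down.

# Per-minute request and failure counts over a 10-minute window;
# minutes 4-5 are a full outage, during which traffic collapses.
requests = [100, 100, 100, 100, 5, 5, 100, 100, 100, 100]
failures = [0,   0,   0,   0,   5, 5, 0,   0,   0,   0]

# Request-based: fraction of successful requests across the window.
request_availability = 1 - sum(failures) / sum(requests)

# Time-based: fraction of minutes in which any request succeeded.
good_minutes = sum(1 for r, f in zip(requests, failures) if f < r)
time_availability = good_minutes / len(requests)

print(round(request_availability, 4))  # 0.9877 despite 2 minutes fully down
print(round(time_availability, 4))     # 0.8
```

The request-based metric reports roughly 98.8% availability even though the service was completely down for 20% of the window, which is why the definition of availability has to be pinned down first.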

November 20, 2024

3 Ways MongoDB EA Azure Arc Certification Serves Customers

One reason more than 50,000 customers across industries choose MongoDB is the freedom to run anywhere—across major cloud providers, on-premises in data centers, and in hybrid deployments. This is why MongoDB is always working to meet customers where they are. For example, many customers choose MongoDB Atlas (which is available in more than 115 cloud regions across major cloud providers) for a fully managed experience. Other customers choose MongoDB Enterprise Advanced (EA) to self-manage their database deployments to meet specific on-premises or hybrid requirements. To that end, we’re pleased to announce that MongoDB EA is one of the first certified Microsoft Azure Arc-enabled Kubernetes applications, which provides customers even more choice of where and how they run MongoDB. Customer adoption of Azure Arc has grown by leaps and bounds. This new certification, and the launch of MongoDB EA as an Arc-enabled Kubernetes application on Azure Marketplace, means that more customers will be able to leverage the unparalleled security, availability, durability, and performance of MongoDB across environments with the centralized management of their Kubernetes deployments.

“We are very excited to have MongoDB available for our customers on the Azure Marketplace. By extending Azure Arc’s management capabilities to your MongoDB deployments, customers gain the benefit of centralized governance, enhanced security, and deeper insights into database performance. Azure Arc makes hybrid database management with MongoDB efficient and consistent. Collaboration between MongoDB and Microsoft represents an opportunity for many of our customers to further accelerate their digital transformation when building enterprise-class solutions with Azure Arc.”
Christa St Pierre, Partner Group Manager, Azure Edge Devices, Microsoft

Here are three ways the launch of MongoDB EA on Azure Marketplace for Arc-enabled Kubernetes applications gives customers greater flexibility.

1. MongoDB EA supports multi-Kubernetes cluster deployments and simplifies management

MongoDB Enterprise Advanced seamlessly integrates market-leading MongoDB capabilities with robust enterprise support and tools for self-managed deployments at any scale. This powerful solution includes advanced automation, comprehensive auditing, strong authentication, reliable backup, and insightful monitoring capabilities, all of which work together to ensure security compliance and operational efficiency for organizations of any size. The relationship between MongoDB and Kubernetes is one of strong synergy. With Kubernetes, MongoDB EA really can run anywhere, such as a single deployment spanning on-premises and more than one public cloud Kubernetes cluster. Customers can use the MongoDB Enterprise Kubernetes Operator, a key component of MongoDB Enterprise Advanced, to simplify the management and automation of self-managed MongoDB deployments in Kubernetes. This includes tasks like creating and updating deployments, managing backups, and integrating with various Kubernetes services. The ability of the MongoDB Enterprise Kubernetes Operator to deploy and manage MongoDB deployments that span multiple Kubernetes clusters significantly enhances resilience, improves disaster recovery, and minimizes latency by allowing data to be co-located closer to where it is needed, ensuring optimal performance and reliability.

2. Azure Arc complements MongoDB EA, providing centralized management

While MongoDB Enterprise Advanced is already among a select group of databases capable of operating across multiple Kubernetes clusters, it is now also supported in Azure Arc-enabled Kubernetes environments. Azure Arc enables the standardized management of Kubernetes clusters across various environments—including in Azure, on-premises, and even other clouds—while harnessing the power of Azure services.
Azure Arc accomplishes this by extending the Azure control plane to standardize security and governance across a wide range of resources and locations. For instance, organizations can centrally monitor all of their Azure Arc-enabled Kubernetes clusters using Azure Monitor for containers, or they can enforce threat protection at scale using Microsoft Defender for Kubernetes. This centralized control significantly reduces the complexity of managing Kubernetes clusters running anywhere, as customers can oversee all resources and apply consistent security and compliance policies across their hybrid environment.

3. Customers can leverage the resilience of MongoDB EA and the centralized governance of Azure Arc

Together, these solutions empower organizations to build robust applications across a wide array of environments, whether on-premises or in multi-cloud settings. The combination of MongoDB Enterprise Advanced and the MongoDB Enterprise Operator simplifies the deployment of MongoDB across Kubernetes clusters, allowing organizations to fully leverage enhanced resilience and geographic distribution that surpasses the capabilities of a single Kubernetes cluster. Azure Arc further enhances this synergy by providing centralized management for all of these Kubernetes clusters, regardless of where they are running; for customers running entirely in the public cloud, we recommend using MongoDB’s fully managed developer data platform, MongoDB Atlas. If you’re interested in learning more, we invite you to explore the Azure Marketplace listing for MongoDB Enterprise Advanced for Arc-enabled Kubernetes applications. Please note that aside from use for evaluation and development purposes, this offering requires the purchase of a MongoDB Enterprise Advanced subscription. For licensing inquiries, we encourage you to reach out to MongoDB at https://mongodb.prakticum-team.ru/contact to secure your license and to begin harnessing the full potential of these powerful solutions.

November 19, 2024

Accelerating MongoDB Migration to Azure with Microsoft Migration Factory

Migrating MongoDB workloads from on-premises solutions or other cloud platforms to MongoDB Atlas on Azure has never been simpler, thanks to Microsoft’s Cloud Migration Factory (CMF). This newly created program is perfect for organizations using MongoDB Enterprise Advanced or Community Edition who are ready to modernize. By transitioning to MongoDB Atlas—an integrated suite of data and application services—customers can simplify their database management, enhance performance, and reduce operational complexities, unlocking new potential and value from their data.

Why the Microsoft Cloud Migration Factory (CMF)?

The Microsoft CMF offers hands-on delivery for eligible workloads to accelerate customer journeys on Azure at no cost. With repeatable best practices, robust tools, structured processes, and a skilled resource pool, the Microsoft CMF delivery model mitigates technical risk and accelerates deployments with optimized architectures to maximize platform benefits. The MongoDB Migration Factory, meanwhile, is a comprehensive program designed to help organizations migrate their existing databases to MongoDB. This program provides a structured approach, tools, and best practices to ensure a smooth and efficient migration process. Microsoft CMF is partnering with MongoDB Migration Factory to jointly deliver migrations of MongoDB Enterprise Advanced or Community Edition deployments to MongoDB Atlas on Azure in a secure, optimized, and customer-focused way. This comprehensive migration approach enables businesses to leverage Azure for their MongoDB-based solutions with speed, confidence, best practices, and minimal disruption risk at an optimized cost.
“This joint delivery offering from Microsoft Cloud Migration Factory (CMF) and MongoDB Migration Factory is designed to accelerate AI transformation priorities for our customers by driving the migrations to MongoDB Atlas on Azure with speed and quality,” said Rashida Hodge, Corporate Vice President of Azure Data and AI at Microsoft. “We have delivered thousands of customer engagements with the CMF model across all Azure workloads, making it a proven approach for accelerating cloud journeys with Microsoft-owned delivery.”

Why MongoDB Atlas on Azure?

MongoDB Atlas on Azure combines MongoDB’s robust document data platform with Azure’s scalability and advanced cloud services, making it ideal for high-performance applications. Offering features like automatic scaling, high availability, and comprehensive security, MongoDB Atlas on Azure supports diverse workloads, including transaction processing, in-app analytics, and full-text search. Integrations with Azure services—including Azure Synapse Analytics, Microsoft Fabric, and Power BI—enhance MongoDB Atlas’s analytics and visualization capabilities, and compliance with standards like HIPAA and GDPR ensures data privacy, enabling organizations to focus on innovation in a secure, scalable environment.

Figure 1: MongoDB Atlas on Azure integrations ecosystem

Migrating MongoDB Community Edition or Enterprise Advanced to MongoDB Atlas on Azure

Migrating from MongoDB Community Edition or MongoDB Enterprise Advanced to MongoDB Atlas on Azure offers numerous benefits, including enhanced scalability, security, and operational efficiency. MongoDB Atlas is a fully managed, cloud-based solution that simplifies database management by handling tasks like automatic scaling, high availability, and data backup.
Leveraging Azure’s infrastructure, Atlas provides integrated services such as Azure Active Directory for improved authentication and identity management, and global cloud coverage to reduce latency by deploying clusters closer to users. MongoDB Atlas on Azure also includes robust security features like encryption at rest and in transit, network isolation, and advanced access controls, meeting compliance standards. These features are often difficult to implement in a self-managed environment. Additionally, Atlas offers advanced monitoring and automated tuning tools for optimizing database performance and resource usage, helping to reduce costs over time. For organizations considering migration to MongoDB Atlas, Microsoft CMF offers end-to-end guidance, providing a clear roadmap for every stage of the migration process, from initial validation to post-migration testing. With flexible migration paths that cater to a range of needs, Microsoft CMF supports live migrations using tools like mongosync and offline migrations with MongoDB’s native tools, enabling everything from minimal-downtime transitions to complete re-hosting. Best of all, Microsoft CMF is a complimentary service, which means that organizations don’t need to worry about budgets and can focus on the transition to MongoDB Atlas on Azure.

“In collaboration with MongoDB Professional Services, the CSX team leveraged MongoDB and Microsoft Migration Factory to migrate a mission-critical railroad transportation app quickly and seamlessly with zero downtime.”
John Maio, Department Head, Enterprise Data & Analytics at CSX

Getting started

Microsoft CMF’s structured approach guides organizations through each critical milestone to ensure a smooth migration process. For those interested in migrating their MongoDB setup to Azure, contact MongoDB today to take advantage of this free migration opportunity and experience the ease of MongoDB Atlas on Azure with Microsoft CMF support.

November 19, 2024

MongoDB Database Observability: Integrating with Monitoring Tools

This post is the final in a three-part series on leveraging database observability. Welcome back to our series on Leveraging Database Observability! Our previous post showcased a real-world use case highlighting how MongoDB Atlas’s observability tools effectively tackle database performance challenges. Whether you’re a developer, DBA, or DevOps engineer, our mission is to empower you to harness the full potential of your data through our observability suite. Integrating Atlas metrics with your central enterprise observability tools can simplify your operations. By seamlessly working with popular observability tools, our approach helps teams streamline workflows and enhance visibility across systems.

Integrating MongoDB Atlas with third-party monitoring tools

MongoDB’s developer data platform combines all essential data services for building modern applications within a unified experience. Our purpose-built observability tools for Atlas environments offer automatic monitoring and optimization, guiding diagnostics tailored specifically for MongoDB. Additionally, we extend Atlas metrics into your existing enterprise observability stack, enabling seamless integration without replacing your current tools. This creates a consolidated, single-pane view that unifies Atlas telemetry with other tech and application metrics, ensuring comprehensive visibility into both database and full-stack performance. This integration empowers you to monitor, receive alerts, and make data-driven decisions within your existing workflows, driving greater efficiency. Below is a quick guide to modifying integration settings through the Atlas UI and the popular integrations we support:

1. Navigate to the Project Integrations page in Atlas.
2. Choose the organization and project you want to configure from the navigation bar.
3. On the Project Integrations page, select the third-party services you’d like to integrate.
4. Configure the chosen services with the required API keys and regions.
Critical integrations for your observability platform

With Atlas’s Datadog and Prometheus integrations, you can send critical MongoDB metrics to these platforms, empowering detailed, real-time monitoring. Through Datadog, you can track database operation counts, query efficiency, and resource usage, ideal for pinpointing bottlenecks and managing resources. Similarly, Prometheus enables you to monitor essential metrics like query times, connection rates, and memory usage, supporting flexible tracking of database health and performance. Both integrations facilitate proactive detection of issues, alert configuration for resource thresholds, and a cohesive view of Atlas data when visualized in Grafana. Atlas’s integration with PagerDuty streamlines incident management by sending metrics like performance alerts, billing anomalies, and security events directly to PagerDuty. This integration records incidents automatically, notifies teams upon alerts, and supports two-way syncing, ensuring resolved alerts in Atlas are reflected in PagerDuty. It enables efficient incident response and resource allocation to maintain system stability. With Atlas integrations for Microsoft Teams and Slack, you can route key metrics—such as query latency, disk usage, and throughput—to these channels for timely updates. Teams can use these insights for real-time performance monitoring, incident response, and collaboration. Notifications through these platforms ensure your team stays informed on database performance, storage health, and user activity changes as they occur.

Use case: Centralized observability with MongoDB Atlas, Datadog, and Slack

Let’s walk through a hypothetical scenario for ShopSmart, an e-commerce company that leverages MongoDB Atlas to manage its product catalog and customer data. As traffic surges, the DevOps team faces challenges in monitoring application performance and database health effectively.
To tackle these challenges, the team leverages MongoDB Atlas’s integration with Datadog and Slack, creating a powerful observability ecosystem.

Integrating MongoDB Atlas with Datadog: The team pushes key MongoDB Atlas metrics into Datadog, such as query performance, connection counts, and Atlas Vector Search metrics. With Datadog, they can visualize these metrics and correlate overall MongoDB performance with their other applications. Out-of-the-box monitors and dedicated dashboards allow the team to track metrics like throughput, average read/write latency, and current connections. This visibility helps pinpoint bottlenecks in real time, ensuring optimal database performance and improving overall application responsiveness.

Setting up alerts in Datadog: The team configures alerts for critical metrics like high query latency and increased error rates. When thresholds are breached, Datadog instantly notifies the team. This proactive approach allows the team to address potential performance issues before they impact customers.

Integrating Datadog with Slack: To ensure fast communication, alerts are sent directly to the dedicated Slack channel, “ShopSmart-Alerts.” This integration fosters seamless collaboration, enabling the team to discuss and resolve issues in real time.

With these integrations, ShopSmart’s engineering team can monitor performance quickly and address issues efficiently. The unified observability approach enhances operational efficiency, improves the customer experience, and supports ShopSmart’s competitive edge in the e-commerce industry. By leveraging MongoDB Atlas, Datadog, and Slack, the team ensures scalable performance and drives continuous innovation.

Conclusion

MongoDB Atlas empowers developers and organizations to achieve unparalleled observability and control over their database environments.
By seamlessly integrating with central enterprise observability tools, Atlas enhances your ability to monitor performance metrics and ensures you can do so within your existing workflows. This means you can focus on building modern applications confidently, knowing you have the insights and alerts necessary to maintain optimal performance. Embrace the power of MongoDB Atlas and transform your approach to database management—because your applications can thrive when your data is observable. And that wraps up our Leveraging Database Observability series! We hope you learned something new and found value in these discussions. Sign up for MongoDB Atlas , our cloud database service, to see database observability in action. To dive deeper and expand your knowledge, check out this learning byte for more insights on the MongoDB observability suite and how it can enhance your database performance.
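The threshold-based alerting that ShopSmart configures in Datadog reduces to a simple pattern: compare each incoming metric against a configured limit and fire an alert when it is exceeded. The sketch below illustrates that logic with invented metric names and thresholds; in practice Datadog evaluates the monitors and posts the notifications to Slack.

```python
# Hypothetical sketch of threshold alerting, as described in the
# ShopSmart scenario. Metric names and thresholds are invented.

THRESHOLDS = {
    "query_latency_ms": 250,   # alert if average latency exceeds 250 ms
    "error_rate_pct": 1.0,     # alert if error rate exceeds 1%
}

def evaluate(metrics):
    """Return an alert string for every metric exceeding its threshold."""
    return [
        f"ALERT {name}={value} exceeds {THRESHOLDS[name]}"
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

# One sampled window: latency is over its limit, error rate is fine.
alerts = evaluate({"query_latency_ms": 410, "error_rate_pct": 0.2})
```

Each resulting alert string stands in for the message a monitoring platform would route to a channel like “ShopSmart-Alerts.”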

November 14, 2024

MongoDB, Microsoft Team Up to Enhance Copilot in VS Code

As modern applications grow increasingly complex, developers face the challenge of meeting market demands for faster, smarter solutions. To stay ahead, they need tools that streamline their workflows, available directly in the environments where they build. According to the 2024 Stack Overflow Developer Survey, Microsoft’s Visual Studio Code (VS Code) is the integrated development environment (IDE) of choice for 74% of professional developers, serving as a central hub for building, testing, and deploying applications. With the rise of AI-powered tools like GitHub Copilot—which is used by 44% of professional developers—there’s a growing demand for intelligent assistance in the development process without disrupting flow. At MongoDB, we believe that the future of development lies in democratizing the value of these experiences by incorporating domain-specific knowledge and capabilities directly into developer flows. That’s why we’re thrilled to announce the public preview of MongoDB’s extension to GitHub Copilot in VS Code. With this integration, developers can effortlessly generate MongoDB queries, inspect collection schemas, and get answers from the latest MongoDB docs—all without leaving their IDE.

“Our collaboration with MongoDB continues to bring powerful, integrated solutions to developers building the modern applications of the future. The new MongoDB extension for GitHub Copilot exemplifies a shared commitment to the developer experience, leveraging AI to ensure that workflows are optimized for developer productivity by keeping everything developers need within reach, without breaking their flow.”
Isidor Nikolic, Senior Product Manager for VS Code, Microsoft

But we’re not stopping there. As AI continues to evolve, so will the ways developers interact with their tools.
Stay tuned for more exciting developments next week at Microsoft Ignite, where we’ll unveil more ways we’re pushing the boundaries of what’s possible with AI through MongoDB and Microsoft’s partnership!

What is MongoDB’s Copilot extension?

MongoDB’s Copilot extension supercharges your GitHub Copilot in VS Code with MongoDB domain knowledge. The Copilot integration is built into the MongoDB for VS Code extension, which has more than 1.8M downloads in the VS Code marketplace today. Type ‘@MongoDB’ in Copilot chat and take advantage of three transformative commands:

Generate queries from natural language (/query): generates accurate MongoDB queries by passing collection schema as context to GitHub Copilot.

Query MongoDB documentation (/docs): answers any documentation questions using the latest MongoDB documentation through retrieval-augmented generation (RAG).

Browse collection schema (/schema): provides schema information for any collection and is useful for data modeling with the Copilot extension.

Generate queries from natural language

This command transforms natural language prompts into MongoDB queries, leveraging your collection schema to produce precise, valid queries. It eliminates the need to manually write complex query syntax, and allows developers to quickly extract data without taking their focus away from building applications. Whether you run the query directly from the Copilot chat or refine it in a MongoDB playground file, we’ve sped up the query-building process by deeply integrating these capabilities into the existing flow of the MongoDB VS Code extension.

Query MongoDB documentation

The /docs command answers MongoDB documentation-specific questions, complemented by direct links to the official documentation site.
There’s no need to switch back and forth between your browser and your IDE; the Copilot extension calls out to the MongoDB Documentation Chatbot API, which leverages retrieval-augmented generation technology to generate responses that are informed by the most recent version of the MongoDB documentation. In the near future, these questions will be smartly routed to documentation for the specific server version of the cluster you are connected to in the MongoDB VS Code extension.

Browse collection schema

The /schema command offers quick access to collection schemas, making it easier for developers to access and interact with their data model in real time. This can be helpful in situations where developers are debugging with Copilot or just want to know valid field names while developing their applications. Developers can additionally export collection schemas into JSON files or ask follow-up questions directly to brainstorm data modeling techniques with the MongoDB Copilot extension.

On the horizon

This is just the start of our work on MongoDB’s Copilot extension. As we continue to improve the experience with new features—like translating and testing queries to and from popular programming languages, and in-line query generation in Playgrounds—we remain focused on democratizing AI-driven workflows, empowering developers to access the tools and knowledge they need to build smarter, faster, and more efficiently, right within their existing environments. Download MongoDB’s VS Code extension and enable the MongoDB chat experience to get started today.
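To make the schema-as-context idea behind the /query command concrete, here is one plausible way a collection’s schema could be serialized into an LLM prompt. This is a hypothetical sketch, not the extension’s actual implementation; the function name, field map, and prompt layout are all invented.

```python
# Hypothetical sketch: format a collection's field names and types,
# plus the user's natural-language request, into prompt context for
# an LLM. Not the actual MongoDB Copilot extension implementation.

def schema_prompt(collection, fields, question):
    """Render a field-name/type map and the user's question as a prompt."""
    lines = [f"Collection: {collection}", "Fields:"]
    lines += [f"  {name}: {ftype}" for name, ftype in fields.items()]
    lines.append(f"Task: write a MongoDB query that answers: {question}")
    return "\n".join(lines)

prompt = schema_prompt(
    "orders",
    {"customer_id": "string", "total": "double", "placed_at": "date"},
    "total revenue per customer in 2024",
)
```

Giving the model the real field names and types is what lets it emit a query that references `customer_id` or `placed_at` rather than guessing at the schema.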

November 13, 2024