Oliver Tree


What’s New From MongoDB at AWS re:Invent 2024

As thousands of attendees make their way home after a week in Vegas—a week packed with learning, product launches, and round-the-clock events—we thought we’d reflect on the show’s highlights. MongoDB was excited to showcase our latest integrations and solutions with Amazon Web Services (AWS), which range from new ways to optimize generative AI to faster, more cost-effective methods for modernizing applications.

But first, we want to thank our friends at AWS for recognizing MongoDB as the AWS Technology Partner of the Year NAMER! This prestigious award recognizes AWS Technology Partners that are using AWS to lower costs, increase agility, and innovate faster. Announced during the re:Invent Partner Awards Gala, the Technology Partner of the Year Award is a testament to the specialization, innovation, and cooperation MongoDB and AWS have jointly brought to customers this year. In addition, MongoDB received AWS Partner of the Year awards for Italy, Turkey, and Iberia. These awards follow wins in the ASEAN Global Software Partner of the Year and Taiwan Technology Partner of the Year categories earlier in the year, further demonstrating the global reach and popularity of the world’s most popular document database!

Harnessing the potential of gen AI

If 2024 (and 2023, and 2022…) was the year of gen AI excitement, then 2025 may turn out to be marked by realistic gen AI implementation. Indeed, we’ve seen customers shift their 2025 AI focus toward optimizing resource-intensive gen AI workloads to drive down costs—and to get the most out of this groundbreaking technology. Retrieval-augmented generation (RAG), one of the main ways companies use their data to customize the output of foundation models, has become the focus of this push for optimization. Customers are looking for easier ways to fine-tune their RAG systems, asking questions like, “How do I evaluate the efficiency and accuracy of my current RAG workflow?” To that end, AWS and MongoDB are introducing new services and technologies that let enterprises optimize RAG compute costs while maintaining accuracy.

First up is vector quantization. By reducing vector storage and memory requirements while preserving performance, quantization empowers developers to build AI-enriched applications at greater scale—and at a lower cost. Leading foundation models like Amazon Titan are already compatible with vector quantization, helping to maintain high accuracy of generated responses while simultaneously reducing costs. You can read more about vector quantization on our blog.
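To make the storage math behind quantization concrete, here is a minimal, self-contained sketch of scalar quantization (compressing float32 embeddings to int8) in Python. It is illustrative only; Atlas Vector Search applies quantization natively, and the vectors here are random stand-ins:

```python
import numpy as np

def scalar_quantize(vectors: np.ndarray) -> tuple[np.ndarray, float, float]:
    """Map float32 vectors onto int8 by linearly rescaling their value range."""
    lo, hi = float(vectors.min()), float(vectors.max())
    scale = (hi - lo) / 255.0
    quantized = np.round((vectors - lo) / scale - 128.0).astype(np.int8)
    return quantized, lo, scale

def dequantize(quantized: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Approximately recover the original floats for similarity scoring."""
    return (quantized.astype(np.float32) + 128.0) * scale + lo

embeddings = np.random.rand(10_000, 1536).astype(np.float32)  # Titan-sized vectors
q, lo, scale = scalar_quantize(embeddings)

print(f"float32 storage: {embeddings.nbytes / 1e6:.1f} MB")  # ~61.4 MB
print(f"int8 storage:    {q.nbytes / 1e6:.1f} MB")           # ~15.4 MB, a 4x saving
print(f"max round-trip error: {np.abs(dequantize(q, lo, scale) - embeddings).max():.4f}")
```

The quartered memory footprint, at the cost of a small and bounded rounding error, is exactly the trade that makes large-scale vector workloads cheaper to run.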
As for RAG evaluation, AWS has launched a new feature for Amazon Bedrock called, naturally, the RAG Evaluator. This tool allows Bedrock users to evaluate and monitor RAG apps natively within the Bedrock environment, eliminating the need for third-party frameworks to run tests and comparisons. As a knowledge base for Amazon Bedrock, MongoDB Atlas is ready on day one to take advantage of the Bedrock RAG Evaluator, allowing companies to gauge and compare the quality of their RAG apps across different applications. Together with the many joint integrations and solutions AWS and MongoDB released in 2024, the RAG Evaluator and vector quantization can streamline the deployment of enterprise generative AI. For example, in October MongoDB, Anthropic, and AWS announced a joint solution to create a memory-enhanced AI agent. Together, the three partners offer enterprise-grade, trusted, secure technologies to build generative AI apps quickly and flexibly using a family of foundation models in a fully managed environment. Overall, MongoDB and AWS are making it easier—and more cost-effective—for developers to build innovative applications that harness the full potential of generative AI on AWS.

From cars to startups to glue

MongoDB and AWS have been hard at work on a number of other solutions for developers across industries. Here’s a quick roundup:

AWS Amplify + AppSync + MongoDB

For startups, or for any organization looking to quickly test and launch applications, speed is everything. That’s why MongoDB teamed up with AWS to create a full-stack solution that provides developers with the same high standards of performance and scalability they would demand for any app. The combination of AWS Amplify, AWS AppSync, and MongoDB Atlas enables seamless front-end development, robust and scalable backend services, out-of-the-box CI/CD, and a flexible, powerful database, letting developers drastically reduce the coding time required to launch new applications. Check out this tutorial and repository for a starter template.

Digital twins on AWS CMS

For those in the automotive sector, MongoDB and AWS have developed a connected mobility solution to help remove the undifferentiated integration, or “technical plumbing,” work of connecting vehicles to the cloud. When used together, Connected Mobility Solution (CMS) on AWS and MongoDB Atlas help accelerate the development of next-generation digital twin use cases and applications, including connected car use cases. MongoDB’s document model allows easy and flexible modeling and storage of connected vehicle sensor data. Read our joint blog with AWS to learn how the MongoDB Atlas document model helps with data modeling of connected vehicle data, and how this capability can be leveraged via the AWS Automotive Cloud Developer Portal (ACDP).

AWS Glue + MongoDB Atlas

Speaking of undifferentiated plumbing, MongoDB Atlas is now integrated into AWS Glue’s visual interface. The new integration simplifies data integration between MongoDB Atlas and AWS, making it easy to build efficient ETL (extract, transform, load) pipelines with minimal effort. With its visual interface, AWS Glue allows users to seamlessly transfer, transform, and load data to and from MongoDB Atlas without needing deep technical expertise in Spark or SQL. In this blog post, we look at how AWS Glue and MongoDB Atlas can transform the way you manage data movement.
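AWS Glue builds this kind of pipeline visually; for readers who want to see the basic shape of an extract-transform-load step against Atlas in plain code, here is a minimal PyMongo sketch. The connection string, database, and field names are placeholders, not part of the Glue integration itself:

```python
from pymongo import MongoClient

# Placeholder URI: in Glue this connection would be configured visually.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
source = client["staging"]["raw_orders"]
target = client["analytics"]["orders"]

batch = []
for doc in source.find({}, {"_id": 0}):                    # Extract
    batch.append({
        "order_id": doc["id"],
        "total_usd": round(doc["amount_cents"] / 100, 2),  # Transform
        "status": doc.get("status", "unknown").lower(),
    })
    if len(batch) == 1000:                                 # Load in bulk batches
        target.insert_many(batch)
        batch.clear()

if batch:
    target.insert_many(batch)
```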
Buy with AWS

In the spirit of making things easier for our joint customers, in early 2025 MongoDB will also join the AWS “Buy with AWS” program. Once up and running, Buy with AWS will allow customers to pay for Atlas using their AWS account directly from the Atlas UI, further reducing friction for customers wanting to get started with Atlas on AWS.

New Atlas Updates Announced at re:Invent

Aside from our joint endeavors with AWS, MongoDB has also been hard at work on improving the core Atlas platform. Here’s an overview of what we announced:

Asymmetrical sharding support for Terraform Atlas Provider

Customers are constantly seeking ways to optimize costs to ensure they get the best value for their resources. With asymmetrical sharding, now available in the Terraform MongoDB Atlas Provider, MongoDB Atlas users can customize the cluster tier and IOPS for each shard, enabling better resource allocation, improved operational efficiency, and cost savings as customer needs evolve.

Atlas Flex Tier

Our new Atlas Flex tier offers the scaling flexibility of serverless with the cost-capped assurance of shared-tier clusters. With the Atlas Flex tier, developers can build and scale applications cost-effectively without worrying about runaway bills or resource provisioning.

New test bench feature in Query Converter

At MongoDB, we firmly believe that the document model is the best way for customers to build applications with their data. In our latest update to Relational Migrator, we’ve introduced generative AI to automatically convert SQL database objects and validate them using the test bench in a fraction of the time, producing deployment-ready code up to 90% faster. This streamlined approach reduces migration risks and manual development effort, enabling fast, efficient, and precise migrations to MongoDB.

For more about MongoDB’s work with AWS—including recent announcements and the latest product updates—please visit the MongoDB Atlas on AWS page! Visit our product page to learn more about MongoDB Atlas.

December 5, 2024

Hanabi Technologies Uses MongoDB to Power AI Assistant, Hana

For all the hype surrounding generative AI, cynics tend to view the few real-world implementations as little more than “fancy chatbots.” But for Abhinav Aggarwal, CEO of Hanabi Technologies, the idea of a generative AI-powered bot that is more than just an assistant was intriguing. “I’d been using ChatGPT since it launched,” said Aggarwal. “That got me thinking: How could we make a chatbot that was like a team member?” And with that concept, Hana was born.

The problem with bots

“Most generative AI chatbots do not act like people; they wait for a command and give a response,” said Aggarwal. “We wanted to create a human-like chatbot that would proactively help people based on what they wanted—automating reminders, for example, or fetching time zones from your calendar to correctly schedule meetings.” Hanabi’s flagship product, Hana, is an AI assistant designed to enhance team collaboration within Google Chat, working in concert with Google Workspace and its suite of products. “Our target customers are smaller companies of between 10 and 50 people. At this size you’re not going to build your own agent from scratch,” he said. Hana integrates with Google APIs to deliver a human-like assistant that chimes in with helpful interventions, such as automatically setting reminders and making sure meetings are booked in the right time zone for each participant. “Hana is designed to bring AI to smaller companies and help them collaborate in a space where they are already working—Google Workspace,” Aggarwal explained.

The MongoDB Atlas solution

For Hana to act like a member of the team, Hanabi needed to process massive amounts of data to support advanced features like retrieval-augmented generation (RAG) for better information retrieval across Google Docs and many other sources. And with a rapidly growing user base of over 600 organizations and 17,000+ installs, Hanabi also required a secure, scalable, and high-performing data storage solution. MongoDB Atlas provided a flexible document model, a built-in vector database, and scalable cloud-based infrastructure, freeing Hanabi engineers to build new features for Hana rather than focusing on rote tasks like extract, transform, and load (ETL) processes or manual scaling and provisioning. Now, MongoDB Atlas handles a variety of responsibilities:

- Scalability and security: MongoDB Atlas’s auto-scaling and automatic backup features have enabled Hanabi to seamlessly grow its user base without the need for manual database management.
- RAG: MongoDB Atlas plays a critical role in Hana’s RAG functionality. The platform enables Hanabi to split Google Docs into small sections, create embeddings, and store these sections in Atlas’s vector database (a sketch of this flow follows the list).
- Development processes: According to Aggarwal, MongoDB’s flexibility in managing changing schemas has been essential to the company’s fast-paced development cycle.
- Data visualization: Using MongoDB Atlas Charts has enabled Hanabi to create comprehensive dashboards for real-time data visualization. This has helped the team track usage, set reminders, and optimize performance without needing to build dashboards by hand.
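The chunk-embed-store flow behind that RAG feature looks roughly like the following minimal PyMongo sketch. The embedding function, index name, and field names are illustrative assumptions, not Hanabi’s actual code:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
chunks = client["hana"]["doc_chunks"]

def embed(text: str) -> list[float]:
    # Placeholder: call your embedding model of choice here and return its vector.
    raise NotImplementedError

def ingest(doc_id: str, text: str, size: int = 800) -> None:
    """Split a document into fixed-size sections and store each with its vector."""
    sections = [text[i : i + size] for i in range(0, len(text), size)]
    chunks.insert_many(
        {"doc_id": doc_id, "order": n, "text": s, "embedding": embed(s)}
        for n, s in enumerate(sections)
    )

def retrieve(question: str, k: int = 5) -> list[dict]:
    """Return the k most semantically similar sections via Atlas Vector Search."""
    return list(chunks.aggregate([
        {"$vectorSearch": {
            "index": "vector_index",        # assumes a vector index on "embedding"
            "path": "embedding",
            "queryVector": embed(question),
            "numCandidates": 100,
            "limit": k,
        }},
        {"$project": {"_id": 0, "doc_id": 1, "text": 1}},
    ]))
```

The retrieved sections would then be handed to the LLM as context for answering the user’s question.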
Impact and results

With MongoDB Atlas, Hanabi can successfully scale Hana to meet the demands of its rapidly expanding user base. The integration is also enabling Hana to offer powerful features like automatic interactions with customers, advanced information retrieval from Google Docs, and manually added memory snippets, making it an essential tool for teams around the world.

Next steps

Hanabi plans to continue integrating more tools into Hana while expanding its reach to personal Gmail users. The company is also rolling out a new automatic-interaction feature, further enhancing Hana’s ability to proactively assist users without direct commands. MongoDB Atlas remains a key component of Hanabi’s stack, alongside Google Kubernetes Engine, NestJS, and LangChain, enabling Hanabi to focus on innovating to improve the customer experience.

Tech Stack

- MongoDB Atlas
- Google Kubernetes Engine
- NestJS
- LangChain

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free MongoDB Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com.

November 21, 2024

Bringing Gen AI Into The Real World with Ramblr and MongoDB

How do you bring the benefits of gen AI, a technology typically experienced on a keyboard and screen, into the physical world? That’s the problem the team at Ramblr.ai, a San Francisco-based startup, is solving with its powerful and versatile 3D annotation and recognition capabilities. “With Ramblr you can record continuously what you are doing, and then ask the computer, in natural language, ‘Where did I go wrong?’ or ‘What should I do next?’” said Frank Angermann, Lead Pipeline & Infrastructure Engineer at Ramblr.ai.

Gen AI for the real world

One of the best examples of Ramblr’s technology, and its potential, is its work with the international chemical giant BASF. In a video demonstration on Ramblr’s website, a BASF engineer can be seen tightening bolts on a connector (or “flange”) joining two parts of a pipeline. Every move the engineer makes is recorded via a helmet-mounted camera. Once the worker is finished for the day, this footage, along with that of every other person working on the pipeline, is uploaded to a database. Using Ramblr’s technology, quality assurance engineers from BASF then query the collected footage from every worker, asking the software to “assess footage from today’s pipeline connection work and see if any of the bolts were not tightened enough.” Having processed the footage, Ramblr assesses whether those flanges have been assembled correctly and identifies any that require further inspection or correction.

The method behind the magic

“We started Ramblr.ai as an annotation platform, a place where customers could easily label images from a video and have machine learning models then identify that annotation throughout the video automatically,” said Frank. “In the past this work would be carried out manually by thousands of low-paid workers tagging videos by hand. We thought we could do better by automating that process,” he added. The software allows customers to easily customize and add annotations to footage for their particular use case, and with its gen AI-powered active learning approach, Ramblr then “fills in” the rest of the video based on those annotations.

Why MongoDB?

MongoDB has been part of the Ramblr technology stack since the beginning. “We use MongoDB Atlas for half of our storage processes. Metadata, annotation data, etc., can all be stored in the same database. This means we don’t have to rely on separate databases to store different types of data,” said Frank. Flexibility of data storage was also a key consideration when choosing a database. “With MongoDB Atlas, we could store information the way we wanted to,” he added. The built-in vector database capabilities of Atlas were also appealing to the Ramblr team: “The ability to store vector embeddings without having to do any more work, for instance not having to move a 3 MB array of data somewhere else to process it, was a big bonus for us.”
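Storing embeddings next to the rest of a record is ordinary document modeling. A hypothetical annotation document (all field names invented for illustration) might look like this:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")

# Hypothetical shape: annotation metadata and its vector live in one document,
# so similarity search needs no second, separate vector store.
client["ramblr_demo"]["annotations"].insert_one({
    "video_id": "site-42/2024-03-01/cam7",
    "frame": 1184,
    "label": "flange_bolt",
    "source": "active_learning",             # vs. a human-drawn annotation
    "bbox": [412, 233, 468, 290],            # pixel box within the frame
    "embedding": [0.0132, -0.9041, 0.2218],  # truncated; real vectors have hundreds of dims
})
```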
The future

Aside from infrastructure and construction QA, robotics is another area in which the Ramblr team is eager to deploy its technology. “Smaller robotics companies don’t typically have the data to train the models that inform their products. There are quite a few use cases where we could support these companies and provide a more efficient and cost-effective way to teach the robots. We are extremely efficient in providing information for object detectors,” said Frank. But while there are plenty of commercial uses for Ramblr’s technology, the growth of spatial computing in the consumer sector, especially following the release of Apple’s Vision Pro and Meta Quest headsets, opens up a whole new category of use cases. “Spatial computing will be a big part of the world. Being able to understand the particular processes, taxonomy, and what the person is actually seeing in front of them will be a vital part of the next wave of innovation in user interfaces and the evolution of gen AI,” Frank added.

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com. Head over to our quick-start guide to get started with Atlas Vector Search today.

September 30, 2024

MongoDB Enables AI-Powered Legal Searches with Qura

The launch of ChatGPT in November 2022 caught the world by surprise. But while the rest of us marveled at the novelty of its human-like responses, the founders of Qura immediately saw another, more focused use case. “Legal data is a mess,” said Kevin Kastberg, CTO of Qura. “The average lawyer spends tens of hours each month on manual research. We thought to ourselves, ‘What impact would this new LLM technology have on the way lawyers search for information?’” And with that, Qura was born.

Gaining trust

From its base in Stockholm, Sweden, Qura set about building an AI-powered legal search engine. The team trained custom models and did continual pre-training on millions of pages of publicly available legal texts, looking to bring the comprehensive power of LLMs to the complex and intricate language of the law. “Legal searches have typically been done via keyword search,” said Kastberg. “We wanted to bring the power of LLMs to this field. ChatGPT created hype around the ability of LLMs to write. Qura is one of the first startups to showcase their far more impressive ability to read. LLMs can read and analyze, on a logical and semantic level, millions of pages of textual data in seconds. This is a game changer for legal search.” Unlike other AI-powered applications, Qura is not interested in generating summaries or “answers” to the questions posed by lawyers or researchers. Instead, Qura aims to provide customers with the best sources and information. “We deliberately wanted to stay away from generative AI. Our customers can be sure that with Qura there is no risk of hallucinations or bad interpretation. Put another way, we will not put an answer in your mouth; rather, we give you the best possible information to create that answer yourselves,” said Kastberg. “Our users are looking for hard-to-find sources, not a gen AI summary of the basic sources,” he added. With this mantra, the company claims to have reduced research times by 78% while surfacing double the number of relevant sources compared with similar legal search products.

MongoDB in the mix

Qura has worked with MongoDB since the beginning. “We needed a document database for flexibility. MongoDB was really convenient as we had a lot of unstructured data with many different characteristics.” In addition to the flexibility to adapt to different data types, MongoDB also offered the Qura team lightning-fast search capabilities. “MongoDB Atlas Search is a crucial tool for our search algorithm agents to navigate our huge datasets. This is especially true of the speed at which we can do efficient text searches on huge corpuses of text, an important part of navigating documents,” said Kastberg. And when it came to AI, a vector database to store and retrieve embeddings was also a real benefit. “Having vector search built into Atlas was convenient and offered an efficient way to work with embeddings and vectorized data.”
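Both capabilities Kastberg describes run inside the aggregation framework. As an illustration of the full-text side, here is a minimal Atlas Search sketch; the index name, collection, and fields are assumptions for the example, not Qura’s schema:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
rulings = client["legal"]["rulings"]

# Assumes an Atlas Search index named "default" covering the "body" field.
results = rulings.aggregate([
    {"$search": {
        "index": "default",
        "text": {
            "query": "force majeure delivery obligations",
            "path": "body",
            "fuzzy": {"maxEdits": 1},   # tolerate minor spelling variation
        },
    }},
    {"$limit": 10},
    {"$project": {"title": 1, "court": 1, "score": {"$meta": "searchScore"}}},
])
for doc in results:
    print(doc["title"], doc["score"])
```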
What's next?

Qura’s larger goal is to bring about the next generation of intelligent search. The legal space is only the start, and the company has larger ambitions to expand beyond Sweden and into other industries too. “We are live with Qura in the legal space in Sweden and are onboarding EU customers in the coming months. What we are building towards is a new way of navigating huge text databases, and that could be applied to any type of text data, in any industry,” said Kastberg.

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com. Head over to our quick-start guide to get started with Atlas Vector Search today.

September 18, 2024

Boosting Customer Lifetime Value with Agmeta and MongoDB

Nobody likes calling customer service. The phone trees, the wait times, the janky music, and how often your issue just isn’t resolved can make the whole process one most people would rather avoid. For business owners, the customer contact center can also be a source of frustration, simultaneously creating customer churn and unhappiness while acting as a black hole of information as to why that churn occurred. It doesn’t have to be this way. What if, instead, customer service centers offered ways to increase the customer lifetime value (CLTV) of customers, pipelines of upsell opportunities, and valuable sources of information? That’s the goal of Agmeta.AI, a startup dedicated to giving businesses actionable insights to fight churn, identify key customers primed for upsell, and improve customer service overall.

Lost in translation

“We started with a very simple thesis: people call into contact centers because they have a problem. That is a real make-or-break moment. The opportunity for churn is very high… or that customer can be a great target for upselling,” said Samir Agarwal, CEO and co-founder of Agmeta. “All of this data sits in a contact center, and businesses don’t ever get to see it,” he added. According to Samir, even the businesses that think they are collecting useful information on customer service interactions are instead collecting incorrect or incomplete information. Or worse, they’re analyzing the information they do record incorrectly. Every business today talks about the importance of customer experience (CX), but the challenge businesses face is how they quantify that CX. Many contact centers substitute call sentiment for CX, or use keywords to determine canned responses. For example, imagine a customer calls into a service center and has what appears to be a positive conversation with an agent. They use words and phrases like “thank you” and “yes, I understand,” and reply “no, I do not have anything else to ask” at the end of a call in which their complaint is not resolved. After putting the phone down, the customer goes on to cancel the service, or worse, initiate a chargeback request with their credit card provider. In some businesses, the customer service agent may manually mark such a call as “positive.” The agent, after all, “answered all the customers’ concerns.” As this example illustrates, the sentiment of a call should not be confused with the measure of customer experience. Another common way businesses try to gather feedback is by sending a post-call survey. However, a problem with this approach is that industry response rates for surveys are close to 3%. This means decisions get made on that small sample and may not take into account the other 97% of customers who didn’t respond. Survey results are also frequently skewed, as those most likely to respond are also the ones who were most unhappy with the contact center interaction and want their voices heard.

The MongoDB advantage

Using machine learning and generative AI, backed by MongoDB Atlas, Agmeta’s software understands not only the content of the call but the context too. Taking our example above, Agmeta’s software would detect that the customer is unhappy, despite their polite and “positive” sounding conversation with the agent, and flag the customer as a potential churn or chargeback candidate in need of immediate attention.
“We will give you a CSAT (customer satisfaction) score and a reason for that CSAT score within seconds of the call ending—for 100% of the interactions,” said Samir. For Agmeta to work, Samir and his team had to have a database ready to accept all kinds of data, including voice recordings, unstructured text, and constantly evolving schemas. “We didn’t have a fixed schema; we needed a database that was as flexible as Agmeta needed to be. I’ve known of MongoDB forever, so when I started to look at databases it seemed an obvious choice to me,” he said. The ability to quickly and easily work with vectorized data for gen AI was also crucial. “MongoDB provides vector search capabilities in an operational database. Rather than having to bolt on a vector database and figure out the ETL, MongoDB solved this issue for me in a single product. The way I look at it, if you do a good job on vector search, then my life as an entrepreneur and software builder becomes much easier,” Samir said.
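Because the vectors live in the operational database, a semantic query can be combined with ordinary metadata filters in a single aggregation. A hedged sketch of what that shape looks like (index, fields, and the embed() helper are illustrative assumptions, not Agmeta’s code):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
calls = client["agmeta_demo"]["calls"]

def embed(text: str) -> list[float]:
    raise NotImplementedError  # call your embedding model here

# Find calls that sound like a known churn pattern, pre-filtered on operational
# fields. Fields used in "filter" must be indexed as filter fields in the
# vector index; no ETL to a separate vector store is required.
similar_risky_calls = list(calls.aggregate([
    {"$vectorSearch": {
        "index": "call_vectors",            # assumed vector index name
        "path": "transcript_embedding",
        "queryVector": embed("customer unhappy, complaint unresolved, wants to cancel"),
        "filter": {"product_line": "broadband", "csat_score": {"$lte": 2}},
        "numCandidates": 200,
        "limit": 10,
    }},
    {"$project": {"call_id": 1, "csat_score": 1, "summary": 1}},
]))
```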
After assessing database options and multiple LLMs, Samir and his team chose to pair MongoDB Atlas with Google Cloud, taking advantage of Gemini on Google’s generative AI platform. “With Atlas on Google Cloud, there are zero worries about database administration, maintenance, and availability. This frees us up to focus on creating business value,” Samir said. “Another benefit of using MongoDB is the flexibility to use the customer’s MongoDB setup, which gives the customer peace of mind about the security and privacy of their data.”

Customer service first

With the power of generative AI and MongoDB, Agmeta can deliver a CSAT score that measures the customer’s true takeaway from the call. The CSAT score is a multi-dimensional score that takes into account areas including resolution (as the customer sees it), politeness, the onus on the customer, and many other attributes. In the short term, the primary use for this technology is to detect and flag customers at risk of churning or of filing a charge dispute with their card provider, as well as candidates for upselling, giving businesses an opportunity to “see” what they could never find out before. “When we talk to customers, the number one thing they are concerned about is customer churn. Right now they operate completely blind, with no idea why people are leaving them,” said Samir. “One large telecoms customer Agmeta is in talks with had no idea where their churn was happening. But when we described being able to assign every customer a CSAT score, they were very excited,” he added. And it’s not just about preventing churn. Businesses can identify happy customers too, targeting them for upsell opportunities. “One of the things we do is spot patterns of unanswered questions from product support interactions,” Samir added. “When we see ‘Oh look, suddenly there are a lot more calls because of a release,’ then we can flag this to product teams as a must-fix issue.”

The future of customer service

Agmeta aims to amalgamate customer information with current and past experiences to give businesses a more holistic—and nuanced—picture of their customers, and more precise next steps they can take. “What we want to do is look back in time and see what else happened with this customer,” Samir said. “The goal is to provide businesses with targeted directives to minimize churn and grow customer lifetime value.” Retrieval-augmented generation plays a key role in Agmeta’s vision. This means an expanded role both for MongoDB’s vector database, as the source of information against which semantic searches can be run, and for Gemini, for analysis and presentation of the directives for the business.

You can learn more about how innovators across the world are using MongoDB by reviewing our Building AI case studies. If your team is building AI apps, sign up for the AI Innovators Program. Successful companies get access to free Atlas credits and technical enablement, as well as connections into the broader AI ecosystem. Additionally, if your company is interested in being featured in a story like this, we’d love to hear from you! Reach out to us at ai_adopters@mongodb.com. Head over to our quick-start guide to get started with Atlas Vector Search today.

September 10, 2024

Agnostiq & MongoDB: High-Performance Computing for All

Material scientists, computational biologists, and AI researchers all have at least one thing in common: the need for huge amounts of processing power, or “compute,” to turn raw data into results. But here’s the problem: many researchers lack the skills needed to build the workflows that move data through the huge networks of distributed servers, CPUs, and GPUs that actually do the number crunching. And that’s where Agnostiq comes in. Since the company’s inception in 2018, Agnostiq has put the power of high-performance computing (HPC) in the hands of researchers, bypassing the need for development expertise to build these essential data and compute pipelines.

Power to the people

“We started research on high-performance computing needs in fields like finance and chemistry, and through the process of onboarding, researchers quickly realized how hard it was for them to access and scale up on the cloud, or tap into HPC and GPU resources,” said Santosh Kumar Radha, Agnostiq’s Head of Product. “If you wanted to scale up, there were not many tools available from the major cloud providers to do this.” To address this bottleneck, the team at Agnostiq built Covalent, a Python-based framework that allows researchers to easily design and run massive compute jobs on cloud platforms, on-prem clusters, and HPC services. With Covalent, startups and enterprises can build any AI or HPC application in a simple, scalable, and cost-effective way using a Python notebook, eliminating the need to interact with the underlying infrastructure. One of the hardest challenges the Covalent team faced was combining traditional HPC with modern cloud technology. Because traditional HPC infrastructure was never designed to run in the cloud, the team spent considerable resources marrying techniques like GPU and CPU parallelization, task parallelization, and graph optimization with distributed cloud computing environments. As a result, researchers can use Covalent to quickly create a workflow that combines the convenience of cloud computing with specialized GPU providers and other HPC services.
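The open-source covalent package gives a feel for this workflow style: tasks are plain Python functions marked as “electrons,” composed into a “lattice,” and dispatched to whatever backend is configured. A minimal sketch based on the public API (executor configuration and any real number crunching omitted):

```python
import covalent as ct

@ct.electron
def featurize(raw: list[float]) -> list[float]:
    # Each electron can be routed to its own backend (CPU, GPU, cloud, HPC).
    return [x * 2.0 for x in raw]

@ct.electron
def train(features: list[float]) -> float:
    return sum(features) / len(features)   # stand-in for a real training step

@ct.lattice
def workflow(raw: list[float]) -> float:
    # The lattice wires electrons into a dependency graph Covalent can schedule.
    return train(featurize(raw))

dispatch_id = ct.dispatch(workflow)([1.0, 2.0, 3.0])
result = ct.get_result(dispatch_id, wait=True)
print(result.result)
```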
Everything, everywhere, all at once

As the name suggests, Agnostiq has always focused on making its platform as open and resource neutral as possible. MongoDB Atlas, with its native multi-cloud capability, was a perfect complement. “At Agnostiq, everything we build has to be technology and vendor neutral. Interoperability is key for us,” said Radha. “We do all the mapping for our customers, so our platform has to perform a seamless transition from cloud to cloud.” The ability to move data between clouds became even more critical following the release of ChatGPT. With an explosion in generative AI research and development, the availability of GPU resources plummeted. “Resource scarcity in the ‘GPT era’ means you couldn’t get access to GPUs anywhere,” Radha added. “If you didn’t have a default cloud posture, you were nowhere, which is why we doubled down on multi-cloud and MongoDB Atlas to give our clients that optionality.”

Open source opening doors

Since the beginning, the team at Agnostiq has chosen MongoDB as its default NoSQL database. At first, the team adopted MongoDB’s free, open source product. “As a small, agile team we didn’t have any DBAs. MongoDB gave us the freedom to build and manage our data workflows without the need for a specialist,” said William Cunningham, Head of HPC at Agnostiq. As its customer base grew along with the demand for cloud computing access, Agnostiq moved to MongoDB Atlas, gaining the freedom to move data seamlessly between AWS, Google Cloud, and Microsoft Azure. This gave Covalent the flexibility to reach multi-cloud compatibility at a faster rate than with standard tooling. Covalent provides a workflow management service by registering jobs, dispatch IDs, and other metadata that allows fellow researchers and developers to reproduce the original work. MongoDB is used on the front end, allowing a high volume of metadata and other assets to be published and cached in accordance with an event-driven architecture. This near real-time experience is key to a product aimed at delivering a unified view over distributed resources. MongoDB Atlas further provided the autoscaling required to grow with the user base and the number of workloads while keeping costs in check. “MongoDB Atlas helps us provide an ideal foundation for modern HPC and AI applications, which require serverless compute, autoscaling resources, distributed workloads, and rapidly reconfigurable infrastructure,” added Radha.

The future

Looking to the future, Agnostiq is focused on servicing the huge demand for gen AI modeling and workflow building. To that end, the company released its own inference service, Function Serve, within Covalent. Function Serve offers customers a complete, enterprise-grade solution for AI development and deployment, supporting serverless AI model training and fine-tuning. With Function Serve, customers can fine-tune, host, and serve any open-source or proprietary model with full infrastructure abstraction, all with only a few additional lines of code. MongoDB Atlas was used to rapidly develop a minimal service catalog while remaining cloud-agnostic. Looking ahead, the team plans to leverage MongoDB Atlas for enterprise and hybrid-cloud deployments in order to quickly meet customers in their existing cloud platforms.

Agnostiq is a member of the MongoDB AI Innovators program, which provides the team with access to Atlas credits and technical best practices. You can get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials available in our AI resources center. Additionally, if your company is interested in being featured, we’d love to hear from you. Reach out to us at ai_adopters@mongodb.com.

August 5, 2024

How Canara HSBC Life Insurance Optimized Costs and Claims Processing with MongoDB

Since its inception in 2008 as a joint venture between Canara Bank and HSBC Insurance, Canara HSBC Life Insurance has focused relentlessly on bringing a fresh perspective to an industry known more for stability and conservatism than for innovation, striving to differentiate itself from the competition through enhanced customer interactions, cutting-edge digital products, and integrated digital services that cater to the evolving needs of customers. For the past six years, Chief Operating Officer Sachin Dutta has been on a mission to bring this customer-first mindset to the digital products and touchpoints his team creates. Speaking at MongoDB’s annual .local developer conference in Delhi, Dutta outlined Canara HSBC Life Insurance’s ongoing digital transformation journey, and how his team’s focus on customer success and business efficiency led them to work with MongoDB for improved efficiencies and results.

“I truly value the partnership we have with MongoDB. We are building a future-ready organization, and this partnership clearly helps us achieve our aim of reaching the last mile possible in customer servicing.”
Mr. Sachin Dutta, Chief Operating Officer, Canara HSBC Life Insurance

Modernizing the architecture and driving developer efficiency

Canara HSBC’s digital transformation was centered on three technical pillars: the cloud, analytics, and mobility. The company focused on creating a more integrated organization and automating manual processes within the system. “We try to remove human intervention, with a life insurance policy delivered in seconds and claims that are settled virtually in seconds,” Dutta says. To get there, Canara HSBC Life Insurance had to move on from its existing architecture, which required multifaceted changes and several new implementations:

- Monolithic applications made alterations a time-consuming process.
- A reliance on rigid relational databases prolonged development timelines, forcing developers to spend time wrangling data when they could be building better products for customers.
- The fully on-premises system had supported the organization in the past but required future-proofing to support growth and deliver a better customer experience. Because of this, valuable development time and money were spent managing, patching, and scaling databases rather than getting new products into the hands of customers.

These technical issues impacted the speed of business, particularly during month-end and year-end data processing, when volumes were high. In addition, batch processing stood in the way of creating the real-time availability of information customers wanted. Dutta and his senior team also realized that their existing infrastructure, increasingly outdated, would make it more challenging to find the right talent in the market. Dutta realized early on that, in order for Canara HSBC to attract and retain the best and brightest developers, the insurer had to offer the chance to work with the latest technologies. Platforms like MongoDB would be integral to this effort. “I want to create an organization that is attracting talent and where people start to enjoy their work, and that benefit then gets passed on to the customers,” Dutta says. Looking to overhaul its existing infrastructure, Canara HSBC Life Insurance wanted to move fast and hire the talent required to best serve its end customers.
Dutta summarized the situation succinctly: “We found that some of those relational structures that had worked for us would not take us through the next 10 years.”

Migrating to a secure, fully managed database platform

After evaluating the solutions on the market, the team decided to transition from their existing on-premises relational databases, like IBM DB2, MySQL, and Postgres, to MongoDB Atlas.

“In the last six years of my work, I’m pleased to say that MongoDB has seamlessly integrated all the processes in the backend. We migrated from a completely legacy-based setup to the new fully managed MongoDB service to enhance IT productivity.”
Mr. Sachin Dutta

The first stage of the journey was moving from monolithic applications and relational databases to a microservices architecture. With its flexible schema and capabilities for redundancy, automation, and scalability, MongoDB served as the best partner to help facilitate the transition. Next, the team moved to modernize key parts of the business, such as underwriting, freeing their data to power more automation in straight-through processing (STP) of policies and faster claims processing. The adoption of a hybrid cloud model shifted Canara HSBC Life Insurance away from on-premises databases to MongoDB Atlas. As a fully managed cloud database, MongoDB Atlas solves issues related to scalability, database management, and overall reliability. MongoDB Atlas is also cloud agnostic, giving the insurance company the option to work with Azure, AWS, and Google Cloud. The MongoDB Atlas BI Connector bridged the gap between MongoDB and traditional BI tools. This seamless integration allowed Canara HSBC Life Insurance to deploy its preferred reporting tools and, when coupled with MongoDB Atlas’s real-time analytics capability, made batch processing a thing of the past.

Halving delivery times and driving business efficiencies

Moving to MongoDB Atlas has had a profound impact on the breadth of digital experiences Canara HSBC Life Insurance can offer customers and the speed at which new products can be developed.

“Something that used to take months, with the implementation of our new tools, could be completed in a couple of weeks or days.”
Mr. Sachin Dutta

And it’s not only the customer experience and product delivery that have benefited from the partnership. Canara HSBC Life Insurance has also realized substantial efficiency gains and savings as a result of working with MongoDB.

“We are leveraging artificial intelligence as a core capability to predict human behavior and auto-underwrite policies, wherein around half of the policies issued today are issued by the system.”
Mr. Sachin Dutta

Highlighted results include:

- Straight-through processing (STP) surged from 37% to an impressive 60%. This is set to increase further with AI/ML integrations and rule suggestions.
- Policy issuance turnaround time improved by 60%.
- Efficiency in operations led to a 20% cost saving per policy issuance.
- Canara HSBC experienced 2x top-line growth due to seamless integration with analytical tools.

Looking ahead, Canara HSBC Life Insurance has already outlined three key areas where the MongoDB partnership will grow. First, Dutta wants to take advantage of MongoDB Atlas’s flexible document data model to collect and organize data on customers from across the business, making MongoDB Atlas the sole database at Canara HSBC Life Insurance and creating a true customer 360 data layer to power sophisticated data analytics. In financial services, this capability is referred to as know your customer (KYC).
“We want to build a data layer that provides a unique experience to the customer after getting to know them,” he says. “That’ll help the company generate better NPS scores and retain customers.” Second, the adoption and integration of AI and machine learning tools also factor heavily into future plans. MongoDB Atlas, with its flexible schema, compatibility with various machine learning platforms, and AI-specific features such as Vector Search and vector storage, is a good fit for the company. In Dutta’s words, “We are going to scale up and capture the GenAI space.” Last, Dutta wants to take advantage of the MongoDB Atlas SQL interface, connectors, and drivers to augment business intelligence for reporting and precise SQL-based report conversions.

Learn more about how MongoDB works with global insurers.

December 4, 2023

Why Leading Insurer Manulife Ditched SQL For MongoDB

Manulife, one of the largest life insurance companies in the world, is in the midst of a digital transformation. Earlier this year, Harry Cheung, Chief Architect of Manulife Asia, spoke to industry experts and developers at MongoDB.local in Hong Kong, outlining the transformation journey so far and what’s next for Manulife.

Better experiences, happier customers

Manulife, like many large enterprises, is under pressure to get new digital products to market, fast. In addition, the insurer is constantly looking for ways to better connect with and serve customers, in real time, by broadening its digital capabilities and further personalizing the interactions customers have with Manulife. Manulife’s existing data infrastructure, however, was becoming a drag on innovation. Traditional relational databases limited how fast the Manulife team could bring new digital products to market. In particular, Manulife’s developers, the architects of these new digital products and services, faced issues working with the existing data infrastructure, including the need to constantly optimize the database, deal with data normalization issues, and contend with slow data querying.

From Relational to NoSQL to MongoDB

From the outset, Manulife knew that they would build their new digital experience on a NoSQL database.

“NoSQL is core to our strategy of building our digital experience. The flexible data model [for NoSQL] means you’re not limited by the schema.”
Harry Cheung, Chief Architect, Manulife Asia

After deciding to go the NoSQL route, Manulife was won over to MongoDB for several reasons, including:

- The document data model: MongoDB’s document data model means no rigid schemas to slow down development. This allows for faster iterations when building new digital products.
- From on-premises to the cloud: Moving from a MongoDB on-premises deployment to MongoDB Atlas in the cloud was easy for the Manulife team.
- Scalability: MongoDB can easily scale horizontally to meet spikes in demand.
- Enterprise-ready and mature: MongoDB is used by the world’s largest insurers, offering greater flexibility alongside the sorts of core requirements you would expect from an RDBMS, such as ACID transactions.
- MongoDB Support: Assistance with projects like data migration from on-premises to cloud services on MongoDB Atlas made the transition smoother.
- A pay-as-you-go model: MongoDB’s elastic scaling capabilities and flexible pricing model keep costs down.
- On- and offline functionality: MongoDB Atlas has built-in mobile device synchronization capabilities, speeding up the development of offline-first insurance applications.

Built with MongoDB: Four Use Cases for Manulife

- MOVE, a Health-Focused App: MOVE is a digital app that encourages users to meet fitness goals, with daily steps linked to insurance premium discounts. MongoDB’s JSON-based document model simplified app development and data management. Manulife initially ran the MOVE app on-premises; when they wanted to migrate the app to a public cloud of their choice (from MongoDB to MongoDB Atlas), the process was seamless.
- Sales Assistance App: Used by 90% of agents, this app helps Manulife agents in the field service customers and complete applications. One area where MongoDB Atlas was particularly helpful was mitigating issues with mobile connectivity and data synchronization. Agents in the field often suffer from internet service interruptions, such as a dropped mobile signal. When the agent’s sales app reconnects, the data from the app has to be synchronized with the backend MongoDB database. Building apps that can handle such offline/online data synchronization, also known as offline-first apps, can significantly eat into development time, slowing time to value. MongoDB Atlas Device Sync solves this issue with native offline-to-online synchronization capabilities that enable uninterrupted client interactions, even in low-connectivity areas. Using Atlas Device Sync, the sales app can store customer, proposal, application, and document metadata on the local device (using MongoDB’s dedicated mobile device database), and then synchronize that data and the customer application to the main MongoDB database when connected to the internet. Manulife launched their sales app’s offline mode in just two months with MongoDB Atlas Device Sync.
- Policy Life Cycle Management: Traditional relational databases spread policy data across multiple tables. With MongoDB, a single document can encapsulate an entire policy (see the sketch after this list), streamlining query access and enhancing performance. MongoDB is now the system of record for policy servicing and life cycle management. This new system was met with overwhelming approval from Manulife’s developers. “In the past, we were using a traditional relational database, with more than 500 core tables. With MongoDB, when I asked developers who had previously used our traditional [RDBMS] database, ‘You have a choice, do you want to use MongoDB or go back to the traditional [database]?’ all our developers said MongoDB.” (Harry Cheung, Chief Architect, Manulife Asia)
- Claims Processing: MongoDB’s capability to handle structured and unstructured data simplified integration with partners, especially in optical character recognition (OCR) for claims processes.
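To picture the policy-as-one-document idea from the third use case, here is a hypothetical, simplified policy document; all fields are invented for illustration. What once spanned many relational tables travels together and can be fetched with a single lookup:

```python
# A single document holding what a relational schema would join across
# holder, beneficiary, rider, and premium tables.
policy = {
    "policy_no": "HK-2023-081542",
    "product": "term_life",
    "status": "in_force",
    "holder": {"name": "W. Chan", "dob": "1984-07-02", "kyc_verified": True},
    "beneficiaries": [
        {"name": "L. Chan", "relation": "spouse", "share_pct": 100},
    ],
    "coverage": {"sum_assured_hkd": 2_000_000, "term_years": 20},
    "riders": [
        {"code": "CI-STD", "name": "critical illness", "sum_assured_hkd": 500_000},
    ],
    "premium_schedule": [
        {"due": "2024-01-01", "amount_hkd": 1_180, "paid": True},
        {"due": "2024-02-01", "amount_hkd": 1_180, "paid": False},
    ],
}
```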
Looking ahead

Manulife is set on expanding its use of NoSQL databases, with MongoDB identified as the go-to solution for such projects.

“MongoDB is our internal standard. MongoDB is our strategic partner for NoSQL development.”
Harry Cheung, Chief Architect, Manulife Asia

About Manulife

Manulife Financial Corporation is one of the largest life insurance companies in the world. The company provides insurance and financial services to millions of customers in Asia, Canada, and the United States. Manulife operates under different brand names: Manulife in North America and Asia, and John Hancock in the U.S. It’s recognized for its long-standing presence in Hong Kong, with a focus on life insurance, mutual funds, and other financial products. In addition to life insurance, Manulife offers a wide range of financial services, including wealth and asset management, group benefits, and retirement services.

Learn more about our work with the world’s leading insurers on our MongoDB for Insurance page.

November 16, 2023

MongoDB Atlas for Telecommunications Launches in Dallas, Alongside AT&T and Cisco

MongoDB has officially launched MongoDB Atlas for Telecommunications, a new program for telecommunications companies to accelerate innovation and maximize the use of their data to better serve customers. The program includes expert-led innovation workshops, tailored technology partnerships, and industry-specific knowledge accelerators that provide customized training paths designed for modernization and for innovative digital services that leverage the potential of modern 5G networks. Visit the MongoDB Atlas for Industries page to learn more. MongoDB Atlas for Telecommunications launched against a backdrop of hundreds of developers and industry IT leaders at our MongoDB.local gathering in Dallas. Throughout the event, speakers and product announcements emphasized how the telecommunications industry is using MongoDB Atlas to gain a competitive advantage.

Atlas for the Edge

5G has opened up new revenue streams for communication service providers (CSPs), with whole new industries racing to take advantage of the up to 100x improvements in speed and throughput that these new networks offer. However, while these networks offer significant speed enhancements, the underlying cloud computing environments, without any optimization, remain the primary bottleneck to delivering the sorts of data processing capabilities that new, ultra-low-latency applications demand. Atlas for the Edge addresses this issue by providing reliable data connectivity across the cloud, data centers, and devices, catering to critical real-time use cases like machine learning, disaster recovery, and autonomous vehicles. MongoDB’s Atlas developer data platform ensures consistent data management, irrespective of the data’s origin, storage location, or destination. This solution not only offers the ability to deploy MongoDB at any edge location, thereby enhancing performance and cost efficiency, but also unifies data across various sources, ensuring a singular, dependable data source. Furthermore, with Atlas Stream Processing, real-time data processing from numerous devices, including sensors and mobile phones, is made possible. This allows for functions like anomaly detection and predictive maintenance on data sets. In terms of security, data encryption is ensured at all stages, and users can also benefit from advanced access controls or integrate with external identity management solutions. You can learn more about how MongoDB and Verizon are building the next generation of mobile 5G networks on our Mobile Edge Computing page.

Atlas Stream Processing

Atlas Stream Processing is especially beneficial for CSPs, as it aids in real-time network performance analysis, ensuring quicker responses and enhancing customer experiences. With the ability to quickly detect anomalies, CSPs can maintain consistent network performance. Moreover, with data security a priority in today’s digital landscape, Atlas Stream Processing encrypts data both in transit and at rest, ensuring the safe processing of CSP data streams. For CSPs, being able to adapt and react in real time is essential. By leveraging Atlas Stream Processing, they can optimize operations, offer improved services, and make data-driven decisions promptly, ensuring they remain competitive in an ever-evolving industry.
Vector Search

Atlas Vector Search, a new addition to the MongoDB Atlas product line, enables CSPs to build intelligent applications powered by semantic search and generative AI over unstructured data, giving customers results that go beyond keyword matching and that infer meaning and intent from a user’s search term. By employing Atlas Vector Search, CSPs can quickly sift through vast datasets, including customer profiles, service logs, or network patterns, to find relevant insights. Such capabilities can enhance customer service, as CSPs can more readily understand user behavior and preferences, helping to uncover new opportunities to address customer needs. Additionally, as CSPs diversify their offerings and integrate more with digital services, the ability to conduct nuanced searches becomes essential for product innovation and market differentiation. Learn more about Atlas Vector Search on our product page.

Attendees of MongoDB.local Dallas were also treated to talks by Luke Rice, Director of Technology at AT&T, and Shaun Roberts, Principal Engineer at Cisco, both of whom outlined how they are using MongoDB Atlas to transform how they do business.

Going on a modernization journey with AT&T

Rice presented an overview of AT&T’s platform for address management, validation, and qualification. This platform is crucial for the accurate deployment of products and services to customers, determining service eligibility based on location, and handling address inconsistencies from various sources. It supports both geocoding and reverse geocoding, translating addresses to geospatial coordinates and vice versa. It interacts with third-party address data providers and has the ability to match and merge inconsistent address data. The system’s importance extends beyond mere validation and service qualification, heavily influencing much of AT&T’s product and service lifecycle. From planning where to build out infrastructure, to working with construction and engineering during builds, to setting up services in areas where addresses are yet to be finalized, the system plays a pivotal role. Additionally, it aids in sales, order provisioning, billing, and dispatching for service management, handling around 380 million unique addresses and managing around 14 million daily transactions. Currently, AT&T’s address management consists of several aging and sometimes overlapping systems, creating issues with maintenance and efficiency. AT&T is on a modernization journey to integrate approximately 12 of these systems into a single solution, the Intelligent Network Location Application Platform (IN-LAP). At the center of IN-LAP is MongoDB Atlas. The primary benefit of MongoDB Atlas to AT&T is its simplified and flexible data structure, which helps them create their single source of truth. And with MongoDB’s flexible schema, AT&T is ready to continuously adapt digital products to new data demands and technologies, like edge computing and AI, without extensive and time-consuming database redesign. MongoDB Atlas also offers AT&T reduced data duplication, multiple data ingestion options, and built-in geospatial functions, allowing the team to perform calculations like point-to-point distances without third-party tools.
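Those geospatial functions are part of MongoDB’s core query language. A brief PyMongo sketch of a distance calculation (collection and field names are illustrative, not AT&T’s schema):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
addresses = client["geo_demo"]["addresses"]

# A 2dsphere index enables spherical geometry queries on GeoJSON points.
addresses.create_index([("location", "2dsphere")])

# Ten closest serviceable addresses to a point, with computed distance in meters.
nearest = addresses.aggregate([
    {"$geoNear": {
        "near": {"type": "Point", "coordinates": [-96.7970, 32.7767]},  # Dallas
        "distanceField": "distance_m",
        "query": {"serviceable": True},
        "spherical": True,
    }},
    {"$limit": 10},
    {"$project": {"address": 1, "distance_m": 1}},
])
for doc in nearest:
    print(doc)
```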
“We do require our solutions like [IN-LAP] to be multi-region, and so that [multi-cloud] being built into the Atlas platform enables me to just focus on building value.”
Luke Rice, Director of Technology at AT&T

Ultimately, accelerating time to market for new products is key for Rice and AT&T. MongoDB Atlas offers AT&T a level of “platformization” that allows developers to focus solely on delivering business value, relieving them of the operational intricacies and management responsibilities of running a large, mission-critical database. Lastly, MongoDB’s native multi-region, multi-cloud support gives Rice and his team reassurance that they can easily scale IN-LAP to different regions and countries in the future.

How Cisco empowers its workforce to build a better customer experience

Roberts shared how Cisco’s Customer Experience (CX) team leverages MongoDB in the company’s groundbreaking Ascension product. Ascension is a platform that empowers people from across the company, including those with little to no coding knowledge, to develop customer-focused innovations using a cloud-native, low-to-no-code solution. Shaun and his CX team had specific requirements for Ascension. They wanted:

- A scalable, production-grade system.
- To engage a broader spectrum of their development base, including those not traditionally involved in coding.
- A cutting-edge, compliant solution upholding Cisco’s stringent security standards.
- A solution that could be managed by a compact, volunteer-based tiger team.

Initially, they experimented with a range of on-prem solutions, including OpenStack, VMware, ArangoDB, and Microsoft SQL Server. Unfortunately, none met their criteria, particularly when it came to scaling the usage of Ascension. This was especially true for their core application, the Cisco Virtual TAC Engineer, which manages tens of thousands of cases daily. In the end, to meet their ambitious goals and stringent requirements, Cisco built its solution using MongoDB Atlas. MongoDB Atlas now serves as the primary database for Ascension and also operates as the de facto database-as-a-service (DBaaS) for client apps built using Ascension, with 98% of all applications built by Ascension users running on MongoDB Atlas. With MongoDB at its core, Ascension helps power some of the most exciting innovations across Cisco Customer Experience, helping the company bring amazing efficiency and service to its customers. The results speak for themselves:

- Ascension cut development time by 40% to 60% compared with traditional methods.
- Over a year, Ascension user numbers surged from 20 to over 900.
- Ascension processes 450,000 to 500,000 workflows daily, amounting to about 12 million monthly.
- Ascension uptime stands at an impressive 99.9%, with the minor downtime attributed to upgrades.
- On average, an engineer at Cisco dedicating time to Ascension spends 12 hours a week on it.

Learn more about how to take advantage of the MongoDB Atlas for Telecommunications program on our MongoDB for Industries page.

November 13, 2023

AI, Vectors, and the Future of Claims Processing: Why Insurance Needs to Understand The Power of Vector Databases

We’re just under a year since OpenAI released ChatGPT, unleashing a wave of hype, investment, and media frenzy around the potential of generative AI to transform how we do business and interact with the world. But while the majority of the investment dollars and media attention zeroed in on the disruptive capabilities of large language models (LLMs), there’s a crucial component underpinning this breakthrough technology that hasn’t received the attention it deserves: the humble vector database.

Vector databases, a type of database that stores numeric representations (or vectors) of your data, allow advanced machine learning algorithms to make sense of unstructured data like images, sound, or text, and return relevant results. (You can read more about vector databases and vector search on our Developer Hub.) For industries dealing with vast amounts of data, such as insurance, the potential impact of vector databases and vector search is immense. In this blog, we will focus on how vectors can speed up and increase the accuracy of claim adjustment. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

The claims process… vectorized!

The process of claim adjustment is time-consuming and error-prone. As one insurance client recently told us, “If an adjuster touches it, we lose money.” For each claim, adjusters need to go through past claims from the client and related guidelines, which are usually scattered across multiple systems and formats, making it difficult to find relevant information and time-consuming to produce an accurate estimate of what needs to be paid.

For this blog, let’s use the example of a car accident claim. In our example, a car has just crashed into another vehicle. The driver gets out and starts taking pictures of the damage, uploading them to their car insurance app, where an adjuster receives the photos. Typically, the adjuster would painstakingly comb through past claims and parse guidelines to work up an estimate of the damage and process the claim. But with a vector database, the adjuster can simply ask an AI to “show me images similar to this crash,” and a Vector Search-powered system can return photos of car accidents with similar damage profiles from the claims history database. The adjuster is now able to quickly compare the car accident photos with the most relevant ones in the insurer's claim history.

What’s more, with MongoDB it is possible to store vectors as arrays alongside existing fields in a document. In our car crash scenario, this means that our fictional adjuster can not only retrieve the most similar pictures but also access complementary information stored in the same database: claim notes, loss amount, car model, car manufacturing year, etc. The adjuster now has a comprehensive view of past accidents and how they were handled by the insurance company, in seconds.
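To make the document model concrete, here is a minimal sketch of what such a claim document might look like. The collection layout and field names (claim_id, photo_embedding, and so on) are illustrative assumptions, not a prescribed schema.

```python
# A hypothetical claim document: the image embedding lives in the same
# document as the business metadata, so one query can return both.
claim_doc = {
    "claim_id": "XYZ-12345",
    "loss_amount": 4280.50,
    "car_model": "hatchback",
    "car_year": 2019,
    "claim_notes": "Rear-end collision, damage to bumper and trunk.",
    "photo_url": "https://example.com/claims/xyz-12345/photo1.jpg",
    # Vector produced by an image embedding model (length depends on the model).
    "photo_embedding": [0.0123, -0.0456, 0.0789],  # truncated for readability
}
```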
For this use case, we have focused on image search, but most data formats can be vectorized, including text and sound. This means that an adjuster could query using claim notes and find similar notes in the claim history or related paragraphs in the guidelines. Vector Search is an extremely powerful tool, as it unlocks access to unstructured data that was previously hard to work with, such as PDFs, images, or audio files.

How does this work in practice? Let’s go through each step of the process (a code sketch of these steps appears at the end of this post):

A search index is configured on an existing collection in MongoDB Atlas.
An image set is sent to an embedding model that generates the image vectors.
The vectors are then stored in Atlas, alongside the current metadata found in the collection.
We run our query against the existing database, and Vector Search returns the most similar images.

Figure 1: A dataset of photos of past accidents is vectorized and stored in Atlas.

Figure 2: An image similarity query is performed, and the top 5 similar images are returned.

Example user interface: a claim-adjuster dashboard leveraging Vector Search.

Figure 3: UI of the claim adjuster application.

We can go a step further and use our vectors to provide an LLM with the context necessary to generate more reliable and accurate outputs, a technique known as retrieval-augmented generation (RAG). These outputs can include:

Natural language processing for tasks such as chatbots and question-answering. Think of a claim adjuster who interacts with a conversational interface and asks questions such as: “Give me the average loss amount for accidents related to one of the photos of claim XYZ” or “Summarize the content of the guidelines related to this accident.”
Computer vision and audio processing, from image classification and object detection to speech recognition and translation.
Content generation, including creating text-based documentation, reports, and computer code, or converting text to an image or video.

Figure 4 brings together the workflow enabling RAG for the LLM.

Figure 4: Dynamically combining your custom data with the LLM to generate reliable and relevant outputs.

If you’re interested in seeing how to do this in practice and want to start prototyping, check out our GitHub repository and dive right in!

Go hands-on!

Vector databases and vector search will transform how insurers do business. In this blog, we have explored how vectors can be leveraged to speed up the work of claim adjusters, which directly translates to an improved customer experience and, crucially, cost savings through faster claims processing and enhanced accuracy. Elsewhere, vector search could be used for:

Enhanced customer service. Imagine being able to instantly pull up comprehensive policyholder profiles, their claims history, and any related information with a simple search. Vector search makes this possible, facilitating better interactions and more informed decisions.
Personalized recommendations. As AI-driven personalization becomes the gold standard, vector search aids in accurately matching policyholders with tailor-made insurance products and services that meet their unique needs.
Scaled AI efforts. From improving customer service chatbots to detecting fraudulent activities, vector-based models can handle tasks more efficiently than traditional methods, helping organizations scale AI implementations.

Atlas Vector Search goes one step further. By unifying the operational database and vector store in a single platform, MongoDB Atlas turbocharges the process of building semantic search and AI-powered applications, empowering insurers to quickly build applications that take advantage of their vast troves of data. Find out why leading insurers trust MongoDB. Head over to our quick-start guide to get started with Atlas Vector Search today.
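As referenced in the walkthrough above, here is a minimal sketch of the indexing and query steps using pymongo (a recent version that supports vector search indexes). The collection and field names, the vector dimension, and the embedding function are illustrative assumptions rather than a prescribed implementation; check the Atlas Vector Search documentation for current syntax.

```python
# A sketch of the steps described above. Names and dimensions are hypothetical.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://<cluster-uri>")
claims = client["insurance"]["claims"]

# Step 1: configure a vector search index on the existing collection.
index = SearchIndexModel(
    definition={
        "fields": [{
            "type": "vector",
            "path": "photo_embedding",
            "numDimensions": 1024,  # must match the embedding model's output size
            "similarity": "cosine",
        }]
    },
    name="claims_photo_index",
    type="vectorSearch",
)
claims.create_search_index(model=index)

# Steps 2-3 happen at ingest time: each photo is run through an embedding
# model and the resulting vector is stored alongside the claim metadata.
def embed_image(path: str) -> list[float]:
    """Placeholder for a call to an image embedding model of your choice."""
    raise NotImplementedError

# Step 4: find claims whose accident photos are most similar to a new one.
query_vector = embed_image("new_crash_photo.jpg")
pipeline = [
    {
        "$vectorSearch": {
            "index": "claims_photo_index",
            "path": "photo_embedding",
            "queryVector": query_vector,
            "numCandidates": 100,  # breadth of the approximate nearest-neighbor search
            "limit": 5,            # the 5 most similar past claims
        }
    },
    # Return the complementary metadata stored in the same documents.
    {"$project": {"claim_notes": 1, "loss_amount": 1, "car_model": 1, "photo_url": 1}},
]
for doc in claims.aggregate(pipeline):
    print(doc)
```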

October 4, 2023

Finance, Multi-Cloud, and The Elimination of Cloud Concentration Risk

Regardless of their size and business mix, most financial institutions have come to understand how cloud and multi-cloud computing services can benefit them. There is the cost-effective flexibility to scale, deploy new services, and innovate to stay aligned with rapidly changing customer expectations. There are security and resiliency benefits that can be difficult and expensive to replicate on-premises, especially for smaller institutions trying to keep pace with rapidly changing standards. And there is geographic access to new markets – from China to Canada – that require deployment of local, in-country systems under emerging sovereignty laws.

As the industry continues to embrace cloud services, regulators are becoming more aware of the challenges associated with cloud computing, especially those that could expose financial institutions to systemic risks potentially undermining the stability of the financial system. Oversight bodies such as the Financial Stability Board (FSB) and the European Banking Authority have urged regulators worldwide to review their supervisory frameworks to ensure that different types of cloud computing activities are fully scoped into industry guidelines. At the same time, public cloud provider outages have disproved the “never fail” paradigm, and there are growing calls for heightened diligence around cybersecurity risks.

Regulators are increasingly focused on cloud concentration risk, or the potential peril created when so much of the technology underpinning global financial services relies on so few large cloud services providers. An outage or cyberattack, they worry, could derail the global financial system. This article will tackle cloud concentration risk for financial services firms, examining how that risk came to be and how multi-cloud can be used to navigate it and prepare for future regulations.

Part 1: What is cloud concentration risk for financial services?
Part 2: Why financial services are evolving from hybrid to multi-cloud
Part 3: Solve cloud concentration risk with cross-cloud redundancy
Part 4: The limits of a single-vendor public cloud solution
Part 5: Commercial and technical benefits of multi-cloud for financial services

Part 1: What is cloud concentration risk for financial services?

The concern over infrastructure concentration and consolidation is twofold. First is the systemic risk of having too many of the world’s banking services concentrated on so few public cloud platforms. Historically, this problem did not exist, as each bank operated its own on-premises infrastructure; failure in a data center was always limited to one single player in the market. Second is the vulnerability of individual institutions, including many smaller institutions, that outsource critical banking infrastructure and services to a few solution providers. These software-as-a-service “hyperscalers” also tend to run on a single cloud platform, creating cascading problems across thousands of institutions in the event of an outage.

In both cases, performance, availability, and security-related concerns are motivating regulators who fear that a provider outage, caused either internally or by bad external actors, could cripple the financial systems under their authority. Such a service shock is much more than a hypothetical worry. In October 2021, Facebook suffered a huge global outage.
More than 3.5 billion people who rely on the social network’s applications were without service for more than five hours after Facebook made changes to a single server component that coordinates its data center traffic. Like Facebook, the big three cloud service providers (CSPs), Microsoft Azure, AWS, and Google Cloud, have all suffered similar outages in recent years. For financial services companies, the stakes of a service interruption at a single CSP rise exponentially as they begin to run more of their critical functions in the public cloud.

Regulators have so far offered financial institutions warnings and guidance rather than enacting new regulations, though they are increasingly focused on ensuring that the industry is considering plans, such as “cloud exit strategies,” to mitigate the risk of service interruptions and their knock-on effects across the financial system. The FSB first raised formal public concern about cloud concentration risk in an advisory published in 2019, and has since sought industry and public input to inform a policy approach. In June 2021, the Monetary Authority of Singapore issued a sweeping advisory on financial institutions’ cybersecurity risks related to cloud adoption.

Meanwhile, authorities are exploring expanded regulations, which could mean action as early as 2022. The European Commission has published a legislative proposal on Digital Operational Resilience aimed at harmonizing existing digital governance rules in financial services, including testing, information sharing, and information risk management standards. The European Securities & Markets Authority warned in September 2021 of the risks of “high concentration” in cloud computing services providers, suggesting that “requirements may need to be mandated” to ensure resiliency at firms and across the system. Likewise, the Bank of England’s Financial Policy Committee said it believes additional measures are needed “to mitigate the financial stability risks stemming from concentration in the provision of some third-party services.” Those measures could include the designation of certain third-party service providers as “critical,” introducing new oversight of public cloud providers; the establishment of resilience standards; and regular resilience testing. Regulators are also exploring controls over employment and sub-contractors, much like energy and public utility companies have today. Hoping to get out ahead of regulators, the financial services industry and the hyperscalers are taking steps to address the underlying issues.

Part 2: Why financial services are evolving from hybrid to multi-cloud

Looking at the existing banking ecosystem, a full embrace of the cloud is extremely rare. While they would like to be able to act like challenger and neo banks, many of the largest and most technology-forward established banks and financial services firms have adopted a hybrid cloud architecture – linking on-premises data centers to cloud-based services – as the backbone of an overarching enterprise strategy. Smaller regional and national institutions, while not officially adopting a cloud-centric mindset, are beginning to explore the advantages of cloud services by working with cloud-based SaaS providers through their existing ISVs and systems integrators. Typically, financial institutions already pair multiple external cloud providers with on-premises infrastructure in an enterprise-wide hybrid cloud approach to IT.
In these scenarios, some functions are executed in legacy, on-premises data centers while others, such as mobile banking or payment processing, are operated out of cloud environments, giving the benefits of speed and scalability.

Moving to a hybrid approach has itself been an evolution. At first, financial institutions put non-core applications in a single public cloud provider to trial its capabilities. These included non-core systems running customer-facing websites and mobile apps, as well as new digital, data, and analytics capabilities. Some pursued deployments on multiple cloud vendors to handle different tasks, while maintaining robust on-premises primary systems, both to pair with public cloud deployments and to power core services.

At MongoDB, we’re increasingly seeing customers, including many financial services companies, run independent workloads on different clouds. However, we believe the real power of multi-cloud applications is yet to be realized. While a hybrid approach utilizing one or two separate cloud providers works for now, the next logical step (taken by many fintech startups) is to fully embrace the cloud and, eventually, a multi-cloud approach, moving away from on-premises infrastructure entirely. Take Wells Fargo. The US-based bank recently announced a two-provider cloud infrastructure and data center strategy, adding that its long-term aspiration is to run most of its services in the public cloud, with an end goal of operating agnostically across providers and free of its own data centers.

Are you really multi-cloud?

Many large financial institutions will say they are already multi-cloud. For most, that means a hybrid cloud approach, using one or more public cloud service providers to handle distinct workloads while maintaining mission-critical services on-premises. In a hybrid cloud deployment, both public cloud and private, on-premises infrastructure function as a single unit, with orchestration tools used to deploy and manage workloads between the two components. In recent years, the line between the two cloud types has blurred, with significant advances in the strategy known as hybrid multi-cloud: “hybrid” referring to the presence of a private cloud in the mix, and “multi-cloud” indicating more than one public cloud from more than one service provider. As enterprises increasingly move in this direction, the hybrid multi-cloud (also known simply as hybrid cloud) looks set to become the predominant IT environment, at least for larger organizations.

The hybrid approach can be seen as a step on the way to harnessing the true potential of a multi-cloud deployment, where data and applications are distributed across multiple CSPs simultaneously, giving financial services firms the ability to:

Use data from an application running in one cloud and analyze that data on another cloud, without manually managing data movement
Use data stored in different clouds to power a single application
Easily migrate an application from one cloud provider to another

With multi-cloud clusters on MongoDB Atlas, data and applications are free to move across multiple clouds with ease. For financial services firms, the multi-cloud journey is one worth serious consideration, both because it holds the potential to increase performance and meet customer expectations, and because it can reduce the risks of relying on one cloud vendor.
Part 3: Solve cloud concentration risk with cross-cloud redundancy

For an industry as tightly regulated and controlled as financial services, and with so much sensitive data being moved and stored, security and resilience are critical considerations. Recent service disruptions at the top public cloud providers remind us that, no matter how many data centers they run, single cloud providers remain vulnerable to weaknesses created by their own network complexity and interconnectivity across sites. One might argue that even a single cloud provider has better uptime stats than an on-premises solution, but recent outages highlight the need for operational agility, given the high availability and performance requirements of critical applications. When an institution relies on a single provider for cloud services, it exposes its business to the risk of potential service shocks originating from that organization’s technical dependencies, cyberattacks, and vulnerabilities to natural disasters or even freak accidents.

Cross-cloud redundancy solves cloud concentration risk

Cloud disruptions vary in severity, from temporary capacity constraints to full-blown outages, and financial services companies need to mitigate as much risk as possible. By distributing data across multiple clouds, they can improve high availability and application resiliency without sacrificing latency. With multi-cloud clusters on MongoDB Atlas, financial services firms are able to distribute their data in a single cluster across Azure, AWS, and Google Cloud. MongoDB Atlas extends the number of locations available by allowing users to choose from over 80 regions across the major CSPs – the widest selection of any cloud database on the market. This is particularly relevant for financial services firms that must comply with data sovereignty requirements but have limited deployment options due to sparse regional coverage on their primary cloud provider. In some cases, only one in-country region is available, leaving users especially vulnerable to disruptions in cloud service. For example, AWS has only one region in Canada and Google Cloud has two. With multi-cloud clusters, organizations can take advantage of all three regions, and add additional nodes in the Azure Toronto and Quebec City regions for extra fault tolerance.

Several MongoDB customers in the financial services sector have already taken steps toward a true multi-cloud approach by building nodes in a second CSP using MongoDB Atlas. These customers are using a 5-and-1 architecture, typically with one CSP as the primary, majority provider, coupled with a secondary backup CSP. In this scenario, the primary CSP hosts most of the operations the bank or financial institution needs to run a specific solution, e.g. mobile banking, with the second CSP used for disaster recovery and regulatory compliance in case the first provider has a major outage or service interruption. Often this secondary CSP also acts as a primary for other services at the firm.
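To make the 5-and-1 pattern concrete, the sketch below shows the general shape of a multi-cloud cluster description, written as the kind of replicationSpecs payload the Atlas Admin API accepts for advanced clusters. The field names follow that API’s shape but should be verified against the current reference; the regions, instance sizes, and the choice to model the “and-1” node as a read-only node (so the electable voting count stays odd) are illustrative assumptions, not a copy-paste configuration.

```python
# A sketch of a 5-and-1 style multi-cloud replica set: five electable nodes
# on a primary CSP (AWS) plus one node on a secondary CSP (Azure) that keeps
# a live copy of the data on another cloud for DR and exit planning.
# Verify field names and constraints against the Atlas Admin API reference.
cluster_spec = {
    "name": "mobile-banking",
    "clusterType": "REPLICASET",
    "replicationSpecs": [{
        "regionConfigs": [
            {   # Primary CSP: five electable nodes carry normal operations.
                "providerName": "AWS",
                "regionName": "CA_CENTRAL_1",
                "priority": 7,  # highest priority: preferred primary region
                "electableSpecs": {"instanceSize": "M30", "nodeCount": 5},
            },
            {   # Secondary CSP: one read-only node on another cloud, kept in
                # sync for disaster recovery and regulatory compliance.
                "providerName": "AZURE",
                "regionName": "CANADA_CENTRAL",
                "priority": 0,  # read-only regions do not participate in elections
                "readOnlySpecs": {"instanceSize": "M30", "nodeCount": 1},
            },
        ]
    }],
}
```

An alternative design distributes electable nodes across providers (for example, 3-and-2) so the cluster can elect a new primary even if the majority provider suffers a full outage; the right split depends on how much of the failover responsibility the secondary CSP is meant to carry.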
How Bendigo and Adelaide Bank Simplified Their Architecture and Reached for the Cloud

Bendigo and Adelaide Bank, one of Australia’s largest banks, is planning for a multi-cloud future. “As we work to accelerate the transformation of our business, we believe the benefits of cloud will help our business systems by reducing disruption, improving velocity and consistency, and enhancing our risk and vulnerability management position,” said Ash Austin, Bendigo and Adelaide Bank’s cloud platforms service owner.

For simplification and cloud centricity, MongoDB Atlas, MongoDB’s cloud database service, was a logical next step. “The fact that MongoDB Atlas supported the three major hyperscalers [Google Cloud, AWS, Azure] helped with portability and supports a multi-cloud future for us,” added Dan Corboy, a cloud engineer at Bendigo and Adelaide Bank. “It made it really easy for us to choose MongoDB because we didn’t have to then hedge our bets on a particular cloud provider or a particular process – we could be flexible.”

Part 4: The limits of a single-vendor public cloud solution

In Part 1 we explored the evolution of cloud adoption in the financial services sector and the growing attention on infrastructure concentration risk created by hybrid cloud approaches incorporating only one or two isolated or loosely connected public cloud service providers. Beyond the looming regulatory issues, there are a number of practical business and technology limitations of a single-cloud approach that the industry must address to truly future-proof its infrastructure. Drawbacks to a single-cloud or hybrid approach include:

Geographic constraints

Not all cloud service providers operate in every business region. Choosing a provider that satisfies today’s location needs may seem sensible now, but could prove limiting in the future if an organization expands into new geographies that are underserved by its chosen cloud service provider. A multi-cloud strategy extends the geographic availability of data centers to the longer list of countries served by all the major providers. The availability of local cloud solutions grows increasingly important as more countries adopt data sovereignty and residency laws designed to govern how data is collected, stored, and used locally. Sovereignty rules mandate that data collected and stored within a country be subject to the laws, regulations, and best practices for data collection of that country. Data residency laws require that data about a country’s citizens be collected and stored inside the country, regardless of whether it ultimately gets replicated and sent abroad.

For global financial services companies, this creates thorny technical, operational, and legal issues. Addressing those issues holistically through a single cloud provider is nearly impossible. The topic continues to draw the attention of lawmakers around the world, beyond the handful of countries, such as Russia and Canada, that drove initial action around these policies. The European Union, for one, is actively scoping a unified EU sovereignty policy and action plan to address its growing concerns about control over its data. Following the success of the General Data Protection Regulation, the Digital Markets Act is set to further shape data policy and regulation in the region.

Vendor lock-in

Aside from the technical risks of working with a single cloud provider, there is also commercial risk in placing all of an institution’s bets on one cloud provider. The more integrated an institution’s applications are within a single cloud provider, and the more it relies on the third-party services of that single provider, the harder it becomes to negotiate the cost of cloud services or to consider switching to another provider. Over time, as services are customized and adapted to a single cloud provider's protocols and data structures, it becomes operationally challenging to migrate to a different cloud environment.
The more intertwined a company’s technical architecture is with a single cloud provider, the more difficult it is to design an exit strategy without putting the business at risk of performance lags, heavy “un-customization” work, or price gouging. By locking in, institutions also lose the power to influence service quality should the vendor change the focus of its development, become less competitive, or run into operational problems. Eventually, innovation at the financial services firm slows to the speed of the chosen CSP. Even integrating external apps and services becomes a challenge, reminiscent of the monolithic architecture the new cloud environment was meant to replace.

Multi-cloud and a robust exit strategy

In addition to data portability and high availability, multi-cloud clusters on MongoDB Atlas offer financial services companies a robust set of viable exit strategies when moving workloads to the cloud. While other database services lock clients tightly to one cloud provider and provide little to no leeway to quickly terminate a commercial relationship, MongoDB Atlas can transition database workloads, with zero downtime, from one cloud provider to another. An exit can be made without requiring any application changes, bringing peace of mind for financial services companies planning for business continuity and for cloud exit scenarios in which either a non-stressed or stressed exit from a cloud vendor might be required.

Security homogeneity

Cloud service providers invest heavily in security features and are generally considered among the most sophisticated leaders in cybersecurity. They proactively manage threats through security measures deployed across customer connection points. For financial services, top cloud providers offer enhanced security to meet strict governance requirements. From a risk standpoint, monitoring and securing a single-cloud hybrid deployment is easier than managing threats across multiple clouds. From the perspective of a threat surface, a single cloud poses fewer risks because there are fewer pathways for would-be hackers. The challenge, though, is responding to an event in a single-cloud environment should an incident, intentional or otherwise, occur. In the event of an infrastructure meltdown or cyberattack, a multi-cloud environment gives organizations the ability to switch providers and to back up and protect their data.

Feature limitations

Cloud service providers develop new features asynchronously. Some excel in specific areas of functionality and constantly innovate, while others focus on a different set of core capabilities: Google Cloud’s AI Platform, for instance, Microsoft Azure’s Cognitive Services, or the AWS Lambda platform, which enables serverless, event-driven computing. By restricting deployments to one cloud services provider, institutions limit their access to best-of-breed features across the cloud. They’re locked into using whatever is available on their platform, rather than being able to tap into advances across clouds. Over time, this can limit innovation and put organizations at a competitive disadvantage.

Part 5: Commercial and technical benefits of multi-cloud for financial services

As the financial services industry accelerates its cloud-first mindset, more institutions are finding that a multi-cloud strategy can better position them to meet the rapidly changing commercial, technical, and compliance demands on their business.
What’s more, a fully formed multi-cloud strategy provides an opportunity to partner with the most sophisticated and well-resourced service providers, and to benefit from leading-edge innovation from all of them. The recognition is dawning on the leadership of many banks that a single cloud provider is not only limiting but may be an outright hindrance. As the CEO of one large investment bank told MongoDB, “Multi-cloud is an opportunity for us to unlock the full value of each location, not water things down with abstractions and accept the lowest common denominator.” In addition to facilitating access to leading-edge innovations, a multi-cloud approach offers financial services firms multiple additional benefits.

Optimize performance

Rock-solid service availability and responsiveness are the cornerstones of performance planning in financial services. The goal of any architecture design is to limit downtime and minimize application lag while aligning processing resources to the specific needs of each application. While even single cloud providers log higher uptime than most on-prem solutions involving multiple data centers, a multi-cloud architecture offers additional resiliency and flexibility to meet internal and client performance SLAs – the 99.9999% availability that previously only mainframe technology (the so-called Sysplex cluster) could achieve. In a multi-cloud environment, institutions can dynamically shift workloads among cloud providers to speed up tasks, respond to service disruptions, reduce latency by serving traffic locally, and address regulatory concerns about one-cloud-provider vulnerability. Optimizing for all of these factors yields the best customer experience and the most efficient and cost-effective approach to infrastructure.

Scale dynamically for task and geography

Scalability and locality are critical. Increasingly, customer demands on product experience are pushing financial services providers to meet new requirements that can sometimes be best delivered through geographic scaling and being close to the end user. It’s no longer just about who has the greatest amount of storage or the fastest CPU available – it may mean maximizing application responsiveness by running computing resources close to the end user. This is only becoming more relevant with the roll-out of 5G edge services and the growth in real-time edge computing it requires. Access to multiple clouds creates opportunities to dynamically balance task execution locally for maximum efficiency across geographies, be that California, New York, or Singapore. It also enables institutions to scale storage requirements up and down across providers based on need and cost. In a fast-paced commercial environment, financial institutions can quickly deploy applications at scale in the cloud. By running in multiple clouds, financial institutions have the opportunity to arbitrage cost and performance without compromising their business strategy.

Adapt to business changes

Financial services companies can stay nimble by building flexible multi-cloud capabilities that enable them to adapt quickly to new regulatory, competitive, and financial conditions. This is as true for challenger banks such as Illimity or Current as it is for established institutions such as Macquarie or NETS. An effective multi-cloud strategy can be a solution to managing regulatory, compliance, and internal policy changes by replacing a patchwork of solutions with a common framework across cloud providers.
The ability to move seamlessly among cloud providers gives institutions the capability to quickly address situations such as new data sovereignty laws or a merger by shifting workloads to a more advantageous provider.

Avoid vendor lock-in

With IT costs continuing to grow as a proportion of overall spending, a multi-cloud strategy can help institutions better manage technology outlays to third-party providers by helping them avoid vendor lock-in. Not all services are designed equally, and switching services between providers can have a multimillion-dollar impact on cloud provider bills. In any industry, overreliance on one supplier creates financial and operating risks. The more interconnected, or “sticky,” a single-cloud solution becomes, the more challenging it is to unwind, should it no longer meet the institution’s needs. And by concentrating services with one provider, companies risk losing the financial leverage to negotiate contract terms. By taking a multi-cloud approach, institutions can choose among providers competitively, without being locked in either commercially or by a technical dependency. A multi-cloud approach also allows financial institutions to push harder on providers to develop for their particular needs.

Harness innovative features

The ability to tap into cloud capabilities such as artificial intelligence and machine learning is a major benefit of working with cloud service providers. Through a multi-cloud approach, developers can select features from across cloud providers and deploy the technical building blocks that best suit their needs. They can run their workloads using different tools on the same data set, without having to do manual data replication. That means institutions can access popular services such as AWS Lambda, Google TensorFlow and Cloud AI, and Azure Cognitive Services without cumbersome data migrations. As consumers increasingly demand premium product experiences from financial services institutions, those institutions can gain competitive advantages by deploying best-of-breed applications into user services.

Looking to learn more about how you can build a multi-cloud strategy, or what MongoDB can do for financial services? Take a look at the following resources:

Get started with multi-cloud clusters on MongoDB Atlas
How to create a multi-cloud cluster with MongoDB Atlas
MongoDB’s financial services hub

February 25, 2022
