Silvio Sola


MACH Aligned for Retail: Cloud-Native SaaS

MongoDB is an active member of the MACH Alliance, a non-profit organization of technology companies fostering the adoption of composable architecture principles that promote agility and innovation. Each letter in the MACH acronym corresponds to a different concept that should be leveraged when modernizing heritage solutions and creating brand-new experiences. MACH stands for Microservices, API-first, Cloud-native SaaS, and Headless. In previous articles in this series, we explored the importance of Microservices and the API-first approach. Here, we will focus on the third principle championed by the alliance: Cloud-native SaaS. Let's dive in.

What is cloud-native SaaS?

Cloud-native SaaS solutions are vendor-managed applications developed in and for the cloud, leveraging all the capabilities the cloud has to offer, such as fully managed hosting, built-in security, auto-scaling, cross-regional deployment, automatic updates, built-in analytics, and more.

Why is cloud-native SaaS important for retail?

Retailers are pressed to transform their digital offerings to meet rapidly shifting consumer needs and remain competitive. Traditionally, this means establishing areas of improvement for your systems and instructing your development teams to refactor components to introduce new capabilities (e.g., analytics engines for personalization or mobile app support) or to streamline architectures to make them easier to maintain (e.g., moving from monolith to microservices). These approaches can yield good results but require a substantial investment in time, budget, and internal technical knowledge.

Now, retailers have an alternative tool at their disposal: cloud-native SaaS applications. These solutions are readily available off the shelf and require minimal configuration and development effort. Adopting them as part of your technology stack can accelerate the transformation and time to market of new features, without requiring specific in-house technical expertise. Many cloud-native SaaS solutions focused on retail use cases are available (see Figure 1), including Vue Storefront, which provides a front-end presentation layer for ecommerce, and Amplience, which enables retailers to customize their digital experiences.

Figure 1: Some MACH Alliance members providing retail solutions.

At the same time, in-house development should not be totally discarded, and you should aim to strike the right balance between the two options based on your objectives. Figure 2 shows pros and cons of the two approaches.

Figure 2: Pros and cons of cloud-native SaaS and in-house approaches.

MongoDB is a great fit for cloud-native SaaS applications

MongoDB's product suite is cloud-native by design and is a great fit if your organization is adopting this principle, whether you prefer to run your database on-premises, leveraging MongoDB Community and Enterprise Advanced, or as SaaS with MongoDB Atlas. MongoDB Atlas, our developer data platform, is particularly suitable in this context. It supports the three major cloud providers (AWS, GCP, Azure) and leverages the cloud platforms' features to achieve cloud-native principles and design:

- Auto-deployment and auto-healing: Database clusters are provisioned, set up, and healed automatically, reducing operational and DBA effort.
- Automatically scalable: Built-in auto-scaling capabilities enable the database RAM, CPU, and storage to scale up or down depending on traffic and data volume. A MongoDB Serverless instance abstracts the infrastructure even further, so you pay only for the resources you use.
- Globally distributed: The global nature of the retail industry requires data to be efficiently distributed to ensure high availability and compliance with data privacy regulations, such as GDPR, while implementing strict privacy controls. MongoDB Atlas leverages the flexibility of the cloud with its replica set architecture and multi-cloud support, meaning that data can be easily distributed to meet complex requirements (a minimal connection sketch follows at the end of this post).
- Secure from the start: Network isolation, encryption, and granular auditing capabilities ensure data is only accessible to authorized individuals, thereby maintaining confidentiality.
- Always up to date: Security patches and minor upgrades are performed automatically with no intervention required from your team. Major releases can be integrated effortlessly, without modifying the underlying OS or working with package files.
- Monitorable and reliable: MongoDB Atlas distributes a set of utilities that provide real-time reporting of database activities to monitor and improve slow queries, visualize data traffic, and more. Backups are also fully managed, ensuring data integrity.

Independent Software Vendors (ISVs) increasingly rely on capabilities like these to build cloud-native SaaS applications addressing retail use cases. For example, Commercetools offers a fully managed ecommerce platform underpinned by MongoDB Atlas (see Figure 3). Their end-to-end solution provides retailers with the tools to transform their ecommerce capabilities in a matter of days, instead of building a solution in-house. Commercetools is also a MACH Alliance member, fully embracing the composable architecture paradigms explored in this series. Adopting Commercetools as your ecommerce platform of choice lets you automatically scale your ecommerce as traffic increases, and it integrates with many third-party systems, ranging from payment platforms to front-end solutions. Additionally, its headless nature and strong API layer allow your front end to be adapted based on your brands, currencies, and geographies.

Commercetools runs on and natively ingests data from MongoDB. Leveraging MongoDB for your other home-grown applications means that you can standardize your data estate, while taking advantage of the many capabilities that the MongoDB data platform has to offer. The same principles can be applied to other SaaS solutions running on MongoDB.

Figure 3: MongoDB Atlas and Commercetools capabilities.

Find out more about the MongoDB partnership with Commercetools. Learn how Commercetools enabled Audi to integrate its in-car commerce solution and adapt it to 26 countries.

MongoDB supports your home-grown applications

MongoDB offers a powerful developer data platform, providing the tools to leverage composable architecture patterns and build differentiating experiences in-house. The same benefits of MongoDB's cloud-native architecture explored earlier are also applicable in this context and are leveraged by many retailers globally, such as Conrad Electronics, running their B2B ecommerce platform on MongoDB Atlas.

Summary

Cloud-native principles are an essential component of modern systems and applications. They support ISVs in developing powerful SaaS applications and can be leveraged to build proprietary systems in-house. In both scenarios, MongoDB is strongly positioned to deliver on the cloud-native capabilities that should be expected from a modern data platform.
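As referenced in the "Globally distributed" bullet above, here is a minimal sketch of reading from the nearest replica of a distributed Atlas cluster with PyMongo. The connection string, database, collection, and document fields are placeholders for illustration only, not values from the article.

```python
# Minimal sketch: serving reads from the geographically closest replica
# of a globally distributed Atlas cluster. The connection string and the
# database/collection/field names below are placeholders.
from pymongo import MongoClient, ReadPreference

# Atlas handles provisioning, scaling, and failover; the application
# only needs the connection string.
client = MongoClient(
    "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/",
    retryWrites=True,
)

catalog = client["retail"]["products"]

# Route catalogue reads to the nearest replica to keep latency low for
# a global customer base.
nearest_catalog = catalog.with_options(read_preference=ReadPreference.NEAREST)
print(nearest_catalog.find_one({"sku": "SKU-12345"}))
```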
Thank you to Ainhoa Múgica and Karolina Ruiz Rogelj for their contributions to this post. Stay tuned for our final blog of this series on Headless and check out our previous blogs on Microservices and API-first.

September 22, 2022

Digital Underwriting: Riding the Insurance Transformation Wave With MongoDB

In our previous article about digital underwriting, "A Digital Transformation Wave in Insurance," we covered the main challenges insurers face when it comes to streamlining and modernizing their underwriting processes, along with key areas that can be improved by leveraging the power of data and artificial intelligence. We analyzed how modern IT trends require a complete redesign of manual underwriting processes to enable insurers to leverage new market opportunities and stay relevant in an ever-changing risk landscape. We explored how the full underwriting workflow — from the intake of new cases to risk assessment and pricing — can be redesigned to ease the burden on underwriting teams and enable them to focus on what matters most. In this second article, we'll expand on how new technology paradigms can support transformation initiatives in this space and describe the pivotal role MongoDB plays in disrupting the industry.

The importance of data and new technology paradigms

For digital underwriting transformation initiatives to succeed, organizations must move away from monolithic applications, where data is siloed and functionality is fragmented across different technologies. However, as many organizations have come to realize, lifting and shifting these monolithic applications to the cloud does not automatically bring them closer to achieving their digital objectives. Organizations that are successful in their transformation efforts are increasingly adopting MACH architecture principles to modernize their application stacks. The acronym stands for Microservices, API-first, Cloud-native SaaS, and Headless, and, combined, these principles enable developers to leverage best-of-breed technology and build services that can be used across multiple business workflows and applications. They allow software delivery teams to reduce the time it takes to deliver new business features and promote significant reuse and flexibility far beyond the monolithic applications that pre-date them.

From an insurance perspective, this approach enables underwriting systems to be decoupled into business and capability domains, each working independently, yet sharing data as part of an event-driven design and microservices architecture. Often overlooked, shared capability domains can provide significant value to an organization's business domains, as seen in the visual below.

Figure 1. Key business and capability domains.

Each function of the application should be owned by the team holding expertise in that particular domain and be loosely coupled with the others. Services can communicate with each other via APIs, as well as listen for and consume one another's events. Building a domain-based data modernization strategy can also enable a phased migration away from legacy systems. This allows for immediate realization of the organization's digital objectives, without first engaging in a costly and time-consuming legacy system replacement effort. An event-driven, API-enabled architecture allows for real-time data processing, a core component of digital enablement.

Figure 2. Microservices and event-driven architecture.

Read the previous post in this series, "A Digital Transformation Wave in Insurance."

Decision support services

Once monolithic systems are decomposed into finer-grained domains and services and begin interacting via APIs and events, it is possible to focus on the most crucial component that brings all of them together — the decision support domain.
Its role is to streamline and, where possible, automate underwriting and other decision-making processes that traditionally require heavy administrative and manual work, in order to reduce operational expenses and enable critical underwriting staff to focus on the highest-priority work. Effective underwriting processes require pulling together multiple teams and capability domains (e.g., claim, customer, pricing, billing, and so forth) to be able to reach a decision on whether to insure a new customer or define an adequate pricing and coverage model, among other factors. A decision support engine has the power to fully automate those steps by automatically triggering workflows based on specific events (e.g., a new claim is submitted in the system) as part of the event-driven design referenced earlier, enabling real-time decision making.

Why MongoDB

With the added burden of integrating and working with various sources of data — from APIs to events to legacy databases — and doing so in real time, software delivery teams need a developer data platform that allows them to tame complexity, not increase it. Refactoring systems that have been around for decades is not an easy feat and typically results in multi-year transformation initiatives. MongoDB provides insurers with the same ACID capabilities as relational databases, while introducing new tools and flexibility to ease transformation by increasing developer productivity and fully supporting the MACH principles.

The MongoDB application data model

MongoDB provides a developer data platform leveraged by some of the world's largest insurers. It possesses key capabilities that allow it to:

- Integrate legacy siloed data into a new single view. The flexibility of the document model enables the integration of separate, legacy data stores into an elegant, single-view data model that reduces rather than increases complexity. Without the complexities of another canonical, relational model, application development and data migration efforts are dramatically simplified, and delivery timelines shortened.
- Manage the full lifecycle of containerized applications. MongoDB's Enterprise Operator for Kubernetes lets you deploy and manage the full lifecycle of applications and MongoDB clusters from your Kubernetes environment for a consistent experience regardless of an on-premises, hybrid, or public cloud topology.
- Automate workflows, leveraging events in real time. MongoDB provides the data persistence at the heart of event-driven architectures, with connectors and tools that make it easy to move data between systems (e.g., the MongoDB Connector for Apache Kafka), providing a clear separation between automated underwriting workflows and those requiring manual intervention (a minimal event-driven sketch follows at the end of this post).
- Enable business agility using DevOps methodologies. MongoDB Atlas, the global cloud database for MongoDB, provides users with quick access to fully managed and automated databases. This approach allows development teams to add new microservices and make changes to application components much more quickly. It also saves a substantial amount of operations effort, since database administrators are not required in every sprint to make and manage changes.
- Work quickly with complex data. Developers can analyze many types of data directly within the database, using the MongoDB Aggregation Pipeline framework.
And, with the power of Atlas Data Federation, developers can do this without the need to move data across systems and complex data warehouse platforms, providing the real-time analytics capabilities that underwriting algorithms require.

MongoDB offers a flexible developer data platform that maps to how developers think and code, while allowing data governance when needed. It is strongly consistent and comes with full support for ACID transactions.

Figure 3. The MongoDB developer data platform.

The MongoDB developer data platform addresses a range of use cases without added complexity, including full-text search, support for storing data at the edge on mobile, data lake, charts, and the ability to deliver real-time analytics without moving data between systems. It also provides developers with a powerful yet simplified query interface suitable for a variety of workloads, enabling polymorphism and idiomatic access.

Thank you to Ainhoa Múgica and Karolina Ruiz Rogelj for their contributions to this post. Learn how the MongoDB developer data platform can help you streamline your insurance business.
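As referenced in the "Automate workflows" bullet above, here is a minimal sketch of the event-driven pattern (new claim submitted, automated decisioning triggered) using PyMongo change streams. The connection string, collection names, and the toy routing rule are illustrative placeholders, not a prescribed implementation.

```python
# Minimal sketch of an event-driven decision-support flow: watch the
# claims collection for new submissions and route each one to either an
# automated workflow or a human underwriter. Connection string, database,
# collection names, and the triage rule are illustrative placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/")
db = client["insurance"]

def route_claim(claim: dict) -> str:
    """Toy decision-support rule: low amounts go straight through."""
    return "automated_pricing" if claim.get("amount", 0) < 5_000 else "manual_review"

# Change streams emit an event for every matching write, so downstream
# services can react in real time without polling.
pipeline = [{"$match": {"operationType": "insert"}}]
with db["claims"].watch(pipeline) as stream:
    for event in stream:
        claim = event["fullDocument"]
        queue = route_claim(claim)
        # Hand the claim to the next service; here we simply record the
        # routing decision in a work-queue collection.
        db["work_queue"].insert_one({"claim_id": claim["_id"], "queue": queue})
```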

August 25, 2022

MACH Aligned for Retail: API-First

>> Announcement: Some features mentioned below will be deprecated on Sep. 30, 2025. Learn more.

Retailers must constantly evolve to meet growing customer expectations and remain competitive. Both their internal- and external-facing applications must be developed using principles that promote agility and innovation, moving away from siloed architectures. As discussed in the first article of this series, the MACH Alliance promotes the development of modern applications through open tech ecosystems. MACH is an acronym that represents Microservices, API-first, Cloud-native SaaS, and Headless. MongoDB is a proud member of the Alliance, providing retailers with the tools to build highly flexible and scalable applications.

This is the second in a series of blog posts focused on MACH and how retail organizations can leverage this framework to gain a competitive advantage. In this article, we'll discuss concepts relating to the second letter of MACH: API-first.

Read the first post in this series, "MACH Aligned for Retail: Microservices."

What is an API-first approach and why is it important?

An application programming interface (API) is a set of routines, protocols, and tools that allow applications, or services within a microservices architecture, to talk to each other. APIs can be seen as messengers that deliver requests and responses. Applications built around APIs are said to be API-first. With this approach, the design and development of APIs come before the software implementation. Typically, an interface is created that is used to host and develop the API. The development team then leverages that interface to build the rest of the application. This methodology gives developers access to specific functionalities of external applications or other microservices within the same application, depending on their needs. It promotes reusability because functionalities are interoperable with mobile and other client applications. In addition, applications developed with an API layer in mind can adapt to new requirements more easily, because additional services and automation can be integrated into production when new requirements arise, therefore remaining competitive for longer.

An API-first approach to developing applications

The role of API-first in retail

APIs play a crucial role in deeply interconnected systems that need to interface with other internal applications, third-party partners, and customers — all key areas when it comes to developing powerful retail applications. Think about how an e-commerce platform connects to the different systems making up the purchase process, such as inventory management, checkout, payment processing, shipping, and loyalty programs. The use of APIs is deeply interlinked with the concept of microservices. Software and data need to be decoupled to enable retailers to meet ever-increasing requirements, including omnichannel and cross-platform integration, seamless experiences across physical and online stores, and the ability to leverage real-time capabilities that enable differentiating features, such as live inventory updates and real-time analytics. APIs can be seen as a bridge for loosely coupled microservices to communicate with each other. Besides enabling a microservices architecture, an API-first approach offers the following additional benefits:

- Avoid duplication of efforts and accelerate time to market.
Developers can work on multiple frontends at the same time, confident that functionalities can be integrated by embedding the same APIs once ready. Think of multiple development teams working on an e-commerce web application, mobile portal, and internal inventory management system all at the same time. An API enabling the placement of a new order can be seamlessly leveraged by the web and mobile applications and fed into the inventory management system to aid warehouse workers (a minimal sketch of such an endpoint appears later in this post). Bug fixing and feature enhancements can happen simultaneously, avoiding duplication of efforts and allowing new capabilities to be released to market more quickly.
- Reduce risks and operating costs. An API-first approach enables system stability and interoperability from the beginning, because API efficiency is placed at the center of the development lifecycle and is no longer an afterthought once the application or functionality has been developed. This approach reduces risk for retailers and saves money and effort in troubleshooting unstable systems.
- Enable new opportunities and scale faster. A flexible approach revolving around APIs provides more opportunities when it comes to integrating and refactoring the way different client applications and microservices communicate with each other, allowing retailers to improve and scale their IT offering in a fraction of the time. This approach also changes the way retailers can interact with external partners and do business with them, since partners can be provided with the tools to easily integrate with the retailer's offering.
- Achieve language flexibility. Effective retailers need the capability to adapt their digital offering to different regions and languages. The plug-in capabilities of API-first allow developers to offer language-agnostic solutions that different microservices can integrate with, leveraging region-specific frontends.

Steps to an API-first application

What is the alternative?

The four MACH Alliance principles combined (Microservices, API-first, Cloud-native SaaS, Headless) act as a disrupting force compared to the way applications were built until recently. Adapting to a new technology paradigm requires effort and a different developer mindset. But what was there before? From an API-first perspective, it can be said that the opposite is code-first. With this approach, application development starts in the integrated development environment (IDE), in which code is written and the software takes shape. Development teams know that they will need to build an interface to be able to interact with each function of the code, but it is seldom a priority; developing core functionalities takes precedence over the interface where those functionalities will be hosted and accessed. When the time comes for the interface to be developed, the code has already been defined. This means the API is developed around existing code rather than vice versa, which poses limitations. For example, developers might not be able to return data the way they want because of the underlying data schema.

The code-first approach

Bottlenecks can also occur, as other teams requiring the API will need to wait until the code is finalized to be able to embed it in their underlying applications. Any delays in the software development lifecycle will hold them up and delay progress. Although a code-first approach might have worked in the past, it is no longer suitable for dealing with highly interconnected applications.
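To make the order-placement contract mentioned in the first benefit above concrete, here is a minimal sketch of an API defined up front so that web, mobile, and inventory teams can build against it in parallel. FastAPI and every field name below are illustrative assumptions; the article does not prescribe a framework or schema.

```python
# Minimal API-first sketch: the order-placement contract is defined up
# front, so web, mobile, and inventory teams can code against it before
# the backend logic exists. FastAPI and the field names are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Orders API", version="1.0.0")

class OrderRequest(BaseModel):
    sku: str
    quantity: int
    customer_id: str

class OrderResponse(BaseModel):
    order_id: str
    status: str

@app.post("/orders", response_model=OrderResponse, status_code=201)
def place_order(order: OrderRequest) -> OrderResponse:
    # In a real service this would persist the order and emit an event
    # for the inventory management system; here we return a stub.
    return OrderResponse(order_id="ORD-0001", status="accepted")
```

One benefit of starting from an explicit contract like this is that an OpenAPI description can be generated from it automatically, so each consuming team can integrate against the specification while the backend is still being built.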
Learn more about how MongoDB and MACH are changing the game for ecommerce.

How MongoDB helps achieve an API-first approach

Simply lifting and shifting monolithic applications to a microservices and API-first architecture will only provide minimal benefits if they are still supported by a relational data layer. This is where most of the bottlenecks occur. Changes to application functionalities will require constant refactoring of the database schemas, object-relational mapping (ORM), and refining at the microservice level. Moving to a modern MACH architecture requires a modern data platform that removes data silos.

The MongoDB developer data platform provides a flexible data model, along with automation and scalability features, to adapt to even the most challenging retail use cases and to multiple platforms (e.g., on-premises, cloud, mobile, and web applications). MongoDB Atlas, MongoDB's fully managed cloud database, also provides capabilities to manage the data layer end to end via APIs, such as the MongoDB Atlas Data API. This is a REST-like, resilient API for accessing all Atlas data that enables CRUD operations and aggregations with instantly generated endpoints. It is a perfect answer to an API-first approach, since developers can access their data using the same principles leveraged to connect to other applications and services (a minimal request sketch follows at the end of this post).

The MongoDB Atlas Data API workflow

MongoDB's Atlas Data API provides several other benefits, allowing developers to:

- Build faster with developer-friendly data access. Developers work with a familiar, REST-like query and response format; no client-side drivers are necessary.
- Scale confidently with a resilient, fully managed API that reduces the operational complexity needed to start reading and writing your data.
- Integrate your MongoDB Atlas data seamlessly into any part of your stack — from microservices to analytics workloads.

This article has provided only a sample of what can be leveraged via MongoDB's APIs. The MongoDB Query API provides a comprehensive set of features to seamlessly work with data in a native, familiar way. It supports multiple index types, geospatial data, materialized views, full-text search, and much more. In the next part of this MongoDB and MACH Alliance series, we will discuss how a cloud-native SaaS architecture can enable full application flexibility and scalability.

Read the first post in this series, "MACH Aligned for Retail: Microservices."
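To close, here is a minimal sketch of the Atlas Data API access pattern described above, using Python's requests library. The endpoint URL shape, App ID, API key, data source, database, and collection names are placeholders and assumptions to be replaced with values from your own Atlas project.

```python
# Minimal sketch of calling the Atlas Data API described above. The App
# ID, API key, data source, database, and collection names are
# placeholders; substitute your own values from the Atlas UI.
import requests

BASE_URL = "https://data.mongodb-api.com/app/<data-api-app-id>/endpoint/data/v1"
HEADERS = {
    "Content-Type": "application/json",
    "api-key": "<data-api-key>",
}

# findOne: fetch a single product document over plain HTTPS —
# no MongoDB driver required on the client side.
payload = {
    "dataSource": "Cluster0",
    "database": "retail",
    "collection": "products",
    "filter": {"sku": "SKU-12345"},
}

response = requests.post(f"{BASE_URL}/action/findOne", headers=HEADERS, json=payload)
response.raise_for_status()
print(response.json().get("document"))
```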

June 24, 2022

Digital Underwriting: A Digital Transformation Wave in Insurance

Underwriting processes are at the core of insurance companies, and their effectiveness is directly related to insurers' profitability and success. Despite this fact, underwriting is often one of the most underserved parts of the insurance industry from a technology perspective. There may be sophisticated policy, customer, and claim administration systems, but underwriters often find themselves wrangling data from a variety of sources into spreadsheets in order to adequately evaluate the financial risks that new applicants and scenarios might bring, and translate them into appropriate pricing and coverage decisions.

Due to the complexity and variety of information and sources that must be accessed and integrated, modernized underwriting platforms have often been a difficult objective to achieve for many insurers. The cost and time associated with building such systems, and the possibility of minimal short-term return on investment, have also made it difficult for leaders to secure funding and support within their organizations. These factors have forced underwriters to continue relying on manual processes, which, at best, are often highly inefficient. At worst, they do not sufficiently position an insurer to be competitive in the digitally disrupted future of insurance delivery.

It does not have to be this way, however. This blog post highlights ways in which insurance companies can leverage new technology, and incorporate modern architecture paradigms into their information systems, in order to revolutionize their underwriting workflows.

The underwriting revolution

Technology is changing the way organizations operate and measure risk. New technological advancements in the IoT, manufacturing, and automotive spaces, just to mention a few, are driving insurers to develop new underwriting paradigms personalized to each individual and adjusted based on real-time data. This is already a reality, with some insurers leveraging personal wearable technology to assess the fitness level of clients and adjust life and health insurance premiums accordingly. We are only at the beginning; let's explore what this might look like in 2030.

Imagine a scenario where a professional living in a major urban area orders a self-driving car through their digital assistant to get to a meeting. The assistant is directly linked to the user's insurer, which allows the insurer to automatically calculate the best possible route, taking into account the time required, past accident history, and current traffic conditions, so that the likelihood of car damage and accidents is minimized. If the user decides to drive themselves that day or picks a different route, the mobility premium will increase based on real-time variables of the journey. The user's mobility insurance can be linked to other services, such as a life insurance policy, which can also be subject to increase depending on the commute's risk factors.

We don't have to wait until 2030 for a scenario like this to come to fruition. Thanks to advances in IoT devices, mobile computing, and deep learning techniques mimicking the human brain's perception, reasoning, learning, and problem-solving, many of these capabilities can be made a reality here in 2022. While the insurance industry continues to innovate, the underwriting process is under constant evolution as a result.
Certainly, in the scenario described above, the underwriting decision-making process has shifted from a spreadsheet-based, manual one to one that is fully automated, with AI/ML decision support. The insurers who can achieve this will retain and gain a significant competitive advantage over the next decade.

Technology can help streamline new cases

Underwriters are notoriously faced with administrative complexity when managing any new case, regardless of the risk profile or level. In the commercial insurance space, agents and brokers are generally used as a bridge between the insurer and the insured. Email exchanges among parties are common; these can often lack sufficient detail and require the underwriter to chase missing data in order to successfully close the sale and acquisition of new business. Issues with data quality, or the lack of certain key pieces of information, can be addressed by implementing automated claim procedures leveraging Natural Language Processing (NLP), Optical Character Recognition (OCR), and rich text analysis to programmatically extract data from email and other forms of written communication, alert the agent in case of missing information, and even attempt to automatically enrich missing information in order to facilitate a close of the sale.

What's described above is only the beginning of what's possible when we begin to think about how to bolster and augment underwriting procedures within an insurer. Sanding off the rough edges by reducing manual procedures, and helping underwriters focus less on non-differentiating work and more on high-value activities, can not only alleviate significant pain and frustration for underwriters, but also help grow the book of business by offering more competitive pricing, products, and turn-around times.

Triaging times can be drastically reduced

Insurance providers seeking to grow their book of business, and expand the channels through which they sell, may have to deal with a surge of new coverage requests and changing risk scenarios. However, many insurers may be unprepared to handle such increases in new business intake volumes. Because of legacy systems, workflow, and resource bottlenecks, it's possible that a significant uptick in new business could actually result in a negative outcome for the insurer, due to the inability to process it in a timely and efficient manner. Could you lose business to a competitor because it could not be underwritten in time?

Augmenting traditional workflows with automation and machine learning algorithms can begin to address this challenge. How can you do more without significantly burdening or expanding your underwriting team? Many insurers are beginning to automatically classify and route such increases in business demand using AI/ML. A first step in the underwriting process, after initial intake and enrichment, is triaging, or deciding who can best underwrite the given request. Often, this is also a manual process, relying heavily on someone within the organization who knows how to best route the flow of work based on the skills and experience of the underwriting staff. As with the ability to detect the need for, and enrich, the initial submission intake, machine learning algorithms can also be leveraged to ease the burden and reduce the human bottleneck of routing intake work to the best-suited underwriter.
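To make the triage step concrete, here is a minimal, purely illustrative sketch of routing new submissions to underwriting queues. The fields, thresholds, and queue names are hypothetical, and a production system would typically replace the fixed rules with a trained model, as the article describes.

```python
# Purely illustrative sketch of automated triage: score a new submission
# with simple rules and route it to an underwriting queue. Field names,
# thresholds, and queue names are hypothetical; in practice the scoring
# step would usually be a trained ML model rather than fixed rules.
from dataclasses import dataclass

@dataclass
class Submission:
    line_of_business: str   # e.g., "commercial_property"
    sum_insured: float      # requested coverage amount
    prior_claims: int       # claims in the last five years

def risk_score(sub: Submission) -> float:
    """Toy rule-based score in [0, 1]; higher means riskier."""
    score = 0.0
    score += min(sub.sum_insured / 10_000_000, 1.0) * 0.6
    score += min(sub.prior_claims / 5, 1.0) * 0.4
    return score

def route(sub: Submission) -> str:
    """Send low-risk cases to straight-through processing, the rest to people."""
    score = risk_score(sub)
    if score < 0.3:
        return "automated_pricing"
    if score < 0.7:
        return f"underwriter_pool:{sub.line_of_business}"
    return "senior_underwriter_review"

print(route(Submission("commercial_property", 250_000, 0)))    # automated_pricing
print(route(Submission("commercial_property", 8_000_000, 3)))  # senior_underwriter_review
```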
Risk assessment processes can be made more effective

Once the intake of new cases has been automated and triaged, we need to think about how to streamline the risk assessment process. Does every single new business case need to be priced and adjusted by an actual underwriter? If we can triage and determine who should work on the new case, can we also route some of the low-risk work to a fully automated pricing and underwriting workflow? Can we save the precious time of our underwriting staff for the higher-touch business and accounts that truly need their attention and expertise?

Automated risk assessment has roots in rule-based expert systems dating back to the 1990s. These systems contained tens of thousands of hard-coded underwriting rules that could assess medical, occupational, and avocational risk. They became very complex over the years and still play an essential role in underwriting. ML algorithms can enhance the performance of these systems by fine-tuning underwriting rules and finding new patterns of risk information. The vast amount of data available to insurers can also be used to predict the risk of new cases and scenarios. Once the risk profile of a new case has been established, a pricing model can be applied to programmatically derive the policy cost and communicate it to the prospective client without involving the underwriting team, as imagined in the 2030 scenario mentioned earlier in the article.

Conclusion and follow-up

There are plenty of digital transformation opportunities in the insurance industry. More specifically, focusing on underwriting will help new and existing players in the insurance industry gain a significant competitive advantage in the coming decade. Whether human-based or AI/ML-augmented, underwriting decisions will be underpinned by an ever-growing variety and volume of complex data. In the next blog of the series, "Riding the Insurance Transformation Wave With MongoDB," we'll dive deeper into how MongoDB helps insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data. Stay tuned!

Learn how MongoDB is helping insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data.

June 2, 2022