Steve Jurczak


Built With MongoDB: Buzzy Makes AI Application Development More Accessible

AI adoption rates are sky-high and showing no signs of slowing down. One of the driving forces behind this explosive growth is the increasing popularity of low- and no-code development tools that make this transformative technology more accessible to tech novices. Buzzy, an AI-powered no-code platform that aims to revolutionize how applications are created, is one such company. Buzzy enables anyone to transform an idea into a fully functional, scalable web or mobile application in minutes. Buzzy developers use the platform for a wide range of use cases, from a stock portfolio tracker to an AI t-shirt store. The only way the platform could support such diverse applications is by being built upon a uniquely versatile data architecture. So it’s no surprise that the company chose MongoDB Atlas as its underlying database.

Creating the buzz

Buzzy’s mission is simple but powerful: to democratize the creation of applications by making the process accessible to everyone, regardless of technical expertise. Founder Adam Ginsburg—a self-described husband, father, surfer, geek, and serial entrepreneur—spent years building solutions for other businesses. After building and selling an application that eventually became the IBM Web Content Manager, he created a platform allowing anyone to build custom applications quickly and easily. Buzzy initially focused on white-label technology for B2B applications, which global vendors brought to market. Over time, the platform evolved into something much bigger. The traditional method of developing software, as Ginsburg puts it, is dead. Ginsburg observed two major trends that contributed to this shift: the rise of artificial intelligence (AI) and the design-centric approach to product development exemplified by tools like Figma. Buzzy set out to address two major problems. First, traditional software development is often slow and costly. Small-to-medium-sized business (SMB) projects can cost anywhere from $50,000 to $250,000 and take nine months to complete. Due to these high costs and lengthy timelines, many projects either fail to start or run out of resources before they’re finished. The second issue is that while AI has revolutionized many aspects of development, it isn’t a cure-all for generating vast amounts of code. Generating tens of thousands of lines of code with AI is not only unreliable but also lacks the security and robustness that enterprise applications demand. Additionally, code generated by AI often can’t be maintained or supported effectively by IT teams. This is where Buzzy found a way to harness AI effectively, using it in a co-pilot mode to create maintainable, scalable applications. Buzzy’s original vision focused on improving communication and collaboration through custom applications. Over time, the platform’s mission shifted toward no-code development, recognizing that these custom apps were key drivers of collaboration and business effectiveness. The Buzzy UX is highly streamlined so that even non-technical users can leverage the power of AI in their apps. Initially, Buzzy's offerings were somewhat rudimentary, producing functional but unpolished B2B apps. However, the platform soon evolved. Instead of building its own user experience (UX) and user interface (UI) capabilities, Buzzy integrated with Figma, giving users access to the design-centric workflow they were already familiar with. The advent of large language models (LLMs) provided another boost to the platform, enabling Buzzy to accelerate AI-powered development.
What sets Buzzy apart is its unique approach to building applications. Unlike traditional development, where code and application logic are often intertwined, Buzzy separates the "app definition" from the "core code" (a sketch of this idea appears at the end of this article). This distinction allows for significant benefits, including scalability, maintainability, and better integration with AI. Instead of handing massive chunks of code to an AI system—which can result in errors and inefficiencies—Buzzy gives the AI a concise, consumable description of the application, making it easier to work with. Meanwhile, the core code, written and maintained by humans, remains robust, secure, and high-performing. This approach not only simplifies AI integration but also ensures that updates made to Buzzy’s core code benefit all customers simultaneously, an efficiency that few traditional development teams can achieve.

Flexible platform, fruitful partnership

The partnership between Buzzy and MongoDB has been crucial to Buzzy’s success. MongoDB’s Atlas developer data platform provides a scalable, cost-effective solution that supports Buzzy’s technical needs across various applications. One of the standout features of MongoDB Atlas is its flexibility and scalability, which allows Buzzy to customize schemas to suit the diverse range of applications the platform supports. Additionally, MongoDB’s support—particularly with new features like Atlas Vector Search—has allowed Buzzy to grow and adapt without complicating its architecture. In terms of technology, Buzzy’s stack is built for flexibility and performance. The platform uses Kubernetes and Docker running on Node.js with MongoDB as the database. Native clients are powered by React Native, using SQLite and WebSockets for communication with the server. On the AI side, Buzzy leverages several models, with OpenAI as the primary engine for fine-tuning its AI capabilities. Thanks to the MongoDB for Startups program, Buzzy has received critical support, including Atlas credits, consulting, and technical guidance, helping the startup continue to grow and scale. With the continued support of MongoDB and an innovative approach to no-code development, Buzzy is well-positioned to remain at the forefront of the AI-driven application development revolution.

A Buzzy future

Buzzy embodies the spirit of innovation in its own software development lifecycle (SDLC). The company is about to release two game-changing features that will take AI-driven app development to the next level: Buzzy FlexiBuild, which will allow users to build more complex applications using just AI prompts, and Buzzy Automarkup, which will allow Figma users to easily mark up screens, views, lists, forms, and actions with AI in minutes. Ready to start bringing your own app visions to life? Try Buzzy and start building your application in minutes for free. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start guide.
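To make the "app definition versus core code" separation concrete, here is a minimal sketch of what storing an app definition as a MongoDB document might look like. The schema, field names, and connection string are hypothetical illustrations, not Buzzy's actual implementation:

```python
from pymongo import MongoClient

# Hypothetical Atlas connection string; replace with your own.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
apps = client["nocode_demo"]["app_definitions"]

# The app is described as data, not generated code. A document this small
# is easy to hand to an LLM as context and easy for humans to maintain.
apps.insert_one({
    "name": "stock-portfolio-tracker",
    "screens": [
        {"title": "Portfolio", "view": "list", "datasource": "holdings"},
        {"title": "Add Holding", "view": "form", "fields": ["ticker", "shares"]},
    ],
    "datasources": [
        {"name": "holdings", "fields": ["ticker", "shares", "cost_basis"]}
    ],
})
```

In a design like this, the human-written core engine interprets definitions rather than regenerating code, so improvements to the engine can benefit every app at once.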

October 18, 2024

Built With MongoDB: Atlas Helps Team-GPT Launch in Two Weeks

Team-GPT enables teams large and small to collaborate on AI projects. When OpenAI released GPT-4, it turned out to be a game-changer for the startup. Founded in 2023, the company has been helping people train machine learning (ML) models, in particular natural language processing (NLP) models. But when OpenAI launched GPT-4 in March 2023, the team was blown away by how much progress had been made on large language models (LLMs). So Team-GPT dropped everything they were doing and started experimenting with it. Many of those early ideas are still memorialized on a whiteboard in one of the office's meeting rooms.

The birth of an idea: like many startups, Team-GPT began with a brainstorm on a whiteboard.

Evolving the application

Of all the ideas they batted around, there was one issue in particular the team wanted to solve—the need for a shared workspace where they could experiment with LLMs together. What they found was that having to work with LLMs in the terminal was a major point of friction. Plus, there weren't any sharing capabilities. So they set out to create a UI consisting of chat sharing, in-chat team collaboration, folders and subfolders, and a prompt library. The whole thing came together in an incredibly short period of time. This was due, in large part, to their initial choice of MongoDB Atlas, which allowed them to build with speed and scalability. "MongoDB made it possible for us to launch in just two weeks," said Team-GPT Founder and CTO Ilko Kacharov. "With the MongoDB Atlas cloud platform, we were able to move rapidly, focusing our efforts on developing innovative product features rather than dealing with the complexities of infrastructure management." Before long, the team realized there was a lot more that could be built around LLMs than simply chat, and set out to add more advanced capabilities. Today, users can integrate any LLM of their choice and add custom instructions. The platform also supports multimodal capabilities like ChatGPT Vision and DALL-E. Users can use any GPT model to turn chat responses into a standalone document that can then be edited. All these improvements are meant to unify teams' AI workflows in a single, AI-powered tool.

A platform built for developers

Diving deeper into the more technical aspects of the solution, Team-GPT CEO Iliya Valchanov acknowledges the virtues of the document data model, which underpins the Atlas developer data platform. "We wanted the ability to quickly update and create new collections, add more data, and expand the existing database setup without major hurdles or time consumption," he said. "That's something that relational databases often struggle with." A developer data platform consists of integrated data infrastructure components and services for quick deployment. With transactional, analytical, search, and stream processing capabilities, it supports various use cases, reduces complexity, and accelerates development. Valchanov's team leverages a few key elements of the platform to address a range of application needs. "We benefited from Atlas Triggers, which allow automatic execution of specified database operations," he said. "This greatly simplified many of our routine tasks." It's not easy to build truly differentiated applications without a friction-free developer experience. Valchanov cites Atlas' user-friendly UI as a key advantage for a startup where time is of the essence. And he said that Atlas Charts has been instrumental for the team, who use it every day, even its less technical members.
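As a rough illustration of the flexibility Valchanov describes, the sketch below shows how MongoDB creates collections implicitly on first write and lets differently shaped documents coexist, with no migration step. The names and fields are illustrative assumptions, not Team-GPT's actual schema:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical Atlas connection string; replace with your own.
db = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")["teamgpt_demo"]

# The "chats" collection is created implicitly on first insert.
db.chats.insert_one({
    "workspace": "marketing",
    "folder": ["campaigns", "q3"],  # folders and subfolders as a path array
    "messages": [{"role": "user", "content": "Draft a launch tweet"}],
    "created_at": datetime.now(timezone.utc),
})

# Later, richer documents (e.g., per-chat model settings) can coexist in the
# same collection without altering existing records or running a migration.
db.chats.insert_one({
    "workspace": "marketing",
    "model": {"name": "gpt-4", "custom_instructions": "Be concise"},
    "messages": [],
    "created_at": datetime.now(timezone.utc),
})
```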
Of course, one of the biggest reasons why developers and tech leaders choose MongoDB, and why so many are moving away from relational databases, is its ability to scale—which Valchanov said is one of the most critical requirements for supporting the company's growth. "With MongoDB handling the scaling aspect, we were able to focus our attention entirely on building the best possible features for our customers."

Team-GPT deployment options

Accelerating AI transformation

Team-GPT is a collaborative platform that allows teams of up to 20,000 people to use AI in their work. It's designed to help teams learn, collaborate, and master AI in a shared workspace. The platform is used by over 2,000 high-performing businesses worldwide, including EY, Charles Schwab, Johns Hopkins University, Yale University, and Columbia University, all of which are also MongoDB customers. The company's goal is to empower every person who works on a computer to use AI in a productive and safe manner. Valchanov fully appreciates the rapid change that accompanies a product's explosive growth. "We never imagined that we would eventually grow to provide our service to over 40,000 users," he said. "As a startup, our primary focus when selecting a data platform was flexibility and the speed of iteration. As we transitioned from a small-scale tool to a product used by tens of thousands, MongoDB's attributes like flexibility, agility, and scalability became necessary for us." Another key enabler of Team-GPT's explosive growth has been the MongoDB for Startups program. It offers valuable resources such as free Atlas credits, technical guidance, co-marketing opportunities, and access to a network of partners. Valchanov makes no secret of how instrumental the program has been for his company's success. "The startup program made it free! It offered us enough credits to build out the MVP and cater to all our needs," he said. "Beyond financial aid, the program opened doors for us to learn and network. For instance, my co-founder, Yavor Belakov, and I participated in a MongoDB hackathon in MongoDB's office in San Francisco."

Team-GPT co-founders Yavor Belakov (l) and Iliya Valchanov (r) at a MongoDB hackathon at the San Francisco office.

Professional services engagements are an essential part of the program, especially for early-stage startups. "The program offered technical sessions and consultations with MongoDB staff, which enriched our knowledge and understanding, especially for Atlas Vector Search, aiding our growth as a startup," said Valchanov. The roadmap ahead for the company includes the release of Team-GPT 2.0, which will introduce a brand-new user interface and new, robust functionality. The company encourages anyone looking to learn more or join its efforts to ease adoption of AI innovations to reach out on LinkedIn. Are you part of a startup and interested in joining the MongoDB for Startups program? Apply to the program now. For more startup content, check out our Built With MongoDB blog collection.

August 15, 2024

4 Key Considerations for Unlocking the Power of GenAI

Artificial intelligence is evolving at an unprecedented pace, and generative AI (GenAI) is at the forefront of this transformation. GenAI's capabilities are broad, spanning text generation as well as music and art creation. But what makes GenAI truly unique is its ability to deeply understand context and generate outputs that closely resemble human output. It's about more than conversing with intelligent chatbots: GenAI has the potential to change industries, delivering richer user experiences and unlocking new possibilities. In the coming months and years, we will witness the emergence of applications that harness the power of GenAI, offering capabilities never seen before. Unlike today's widely popular chatbots (such as ChatGPT), users won't necessarily notice that GenAI is working in the background. Behind the scenes, though, these new applications will combine information retrieval and text generation to deliver truly personalized, context-aware user experiences in real time. This process is known as retrieval-augmented generation, or RAG for short. So how does retrieval-augmented generation (RAG) work, and what role does the database play in the process? Below, we take a deeper look at the GenAI landscape and its database requirements. Check out our AI resource page to learn more about building AI-powered applications with MongoDB.

The challenge of training AI foundation models

One of the main challenges facing GenAI is the inability to access private or proprietary data. AI foundation models, of which large language models (LLMs) are a subset, are typically trained on publicly available data and cannot access confidential or proprietary information. Even data in the public domain may be outdated and less relevant, and LLMs are limited in their awareness of recent events and knowledge. Moreover, without proper guidance, an LLM may generate incorrect information, which is unacceptable in most situations. Databases play an important role in addressing these challenges. Instead of sending a prompt directly to the LLM, an application can use a database to retrieve relevant data and include it in the prompt as context. For example, a banking application can query a user's transaction data from a traditional database, add that data to the prompt, and then send this engineered prompt to the LLM. This approach ensures the LLM generates accurate, up-to-date responses and eliminates problems with missing, stale, or inaccurate data.

4 factors to consider when choosing a database for GenAI applications

When everyone has access to the same tools and knowledge base, it isn't easy for enterprises to gain a real competitive advantage from GenAI. Instead, the key to differentiation comes from layering your own unique proprietary data on top of generative AI powered by foundation models and LLMs. When selecting a database to realize the full potential of GenAI-powered applications, organizations should focus on four main factors:

- Queryability: The database needs to support rich, expressive queries and secondary indexes to provide real-time, context-aware user experiences. This capability ensures data retrieval completes within milliseconds, regardless of query complexity or the size of the data stored in the database.
- Flexible data model: GenAI applications typically require data of different types and formats, known as multi-modal data. To accommodate these evolving data sets, the database should have a flexible data model that supports easy onboarding of new data without schema changes, code modifications, or version releases. Multi-modal data can be challenging for relational databases, which are designed to process structured data according to strict schema rules, organizing information into tables of rows and columns.
- Integrated vector search: GenAI applications may need to run semantic or similarity queries against different types of data, such as free-form text, audio, or images. Vector embeddings in a vector database support semantic and similarity queries; they capture the semantic meaning and contextual information of data, making them suitable for tasks such as text classification, machine translation, and sentiment analysis. The database should provide an integrated vector search index, making it simple to keep what would otherwise be two separate systems in sync and ensuring developers work with a unified query language.
- Scalability: As the user base and data scale of GenAI applications grow, the database must scale out dynamically to support ever-increasing data volumes and request rates. Native support for scale-out sharding ensures that database limitations do not impede business growth.

The ideal database solution: MongoDB Atlas

MongoDB Atlas is a powerful, multi-purpose platform for handling the unique needs of GenAI. MongoDB's powerful query API easily handles multi-modal data, allowing developers to deliver more functionality with less code. MongoDB is rated by developers as the most popular document database. Working with documents is simple and intuitive for developers because documents map to objects in object-oriented programming, which are more familiar than the countless rows and tables of relational databases. Flexible schema design allows the data model to evolve continuously to meet the needs of GenAI use cases, which are multi-modal by nature. Through sharding, Atlas scales out to support the dramatic growth in data and request volume driven by GenAI-powered applications. MongoDB Atlas Vector Search natively embeds vector search indexes, eliminating the need to maintain two different systems. Atlas continuously keeps Vector Search indexes up to date with the source data. Developers can use a single endpoint and query language to build queries that combine regular database query filters with vector search filters. This eliminates friction and gives developers an environment to rapidly prototype and deliver GenAI solutions.

Conclusion

GenAI is ready to reshape industries and provide innovative solutions across sectors. With the right database solution, GenAI applications can thrive, delivering accurate, context-aware, dynamic, data-driven user experiences that meet the growing demands of today's fast-paced digital landscape. With MongoDB Atlas, organizations can unlock agility, productivity, and business growth, providing a competitive edge in the rapidly evolving field of generative AI. To learn more about how Atlas helps organizations integrate and operationalize GenAI and LLM data, download our white paper, Embedding Generative AI and Advanced Search into your Apps with MongoDB. If you're interested in leveraging generative AI at your organization, contact us today to learn how we can help with your digital transformation.

February 8, 2024

4 Key Considerations for Harnessing the Power of Generative AI

Artificial intelligence (AI) is evolving at an unprecedented pace, and generative AI (GenAI) stands at the forefront of this massive shift. GenAI's capabilities are extremely broad, ranging from writing text to creating music and art. What makes GenAI most distinctive, however, is that it clearly understands context and produces results that closely resemble those of a human. This is about more than conversations with intelligent chatbots: GenAI has the potential to dramatically change many industries, providing richer user experiences and opening up new possibilities. In the coming months and years, applications that harness the power of GenAI will emerge, delivering capabilities we have not seen before. Unlike ChatGPT, the widely popular chatbot, users will not necessarily be aware that GenAI is running behind the scenes. In the background, however, these new applications will combine information retrieval and text generation to deliver truly personalized, context-appropriate user experiences in real time. This process is called retrieval-augmented generation, or RAG. So how does RAG work, and what role does the database play in this process? Let's take a closer look at the world of GenAI and its database requirements. Check out MongoDB's AI resource page for more details on building AI-powered apps with MongoDB.

The challenge of training AI foundation models

One of GenAI's key challenges is that it cannot access private or proprietary data. AI foundation models, of which large language models (LLMs) are a subset, are generally trained on publicly available data and cannot access confidential or proprietary information. Even data in the public domain may be old or less relevant, and LLMs have limits in recognizing the most recent events or knowledge. Furthermore, without appropriate guidance, LLMs can generate inaccurate information, which is unacceptable in most situations. Databases play an important role in solving these challenges. Rather than sending a prompt directly to the LLM, an application can use a database to retrieve relevant data and include it in the prompt as context. For example, a banking application can query a legacy database for a user's transaction data, add it to the prompt, and then pass this engineered prompt to the LLM. This approach ensures the LLM generates accurate, up-to-date responses, eliminating problems such as missing data, stale data, and inaccuracies.

4 key database considerations for GenAI applications

When everyone can take advantage of the same tools and knowledge base, it is by no means easy for companies to achieve a clear competitive advantage with GenAI. Rather, the key to differentiation lies in layering your own proprietary data on top of generative AI supported by foundation models and LLMs. When choosing a database to harness the full potential of GenAI-powered applications, companies should focus on the following four key considerations:

- Queryability: The database must be able to deliver real-time, context-aware user experiences by supporting rich, expressive queries and secondary indexes. This capability ensures data can be retrieved in milliseconds regardless of the complexity of the query or the size of the data stored in the database.
- Flexible data model: GenAI applications in many cases require data of various types and formats, known as multi-modal data. To accommodate changing data sets, the database must have a flexible data model that can easily onboard new data without schema changes, code modifications, or version releases. Relational databases are designed to process structured data, with information organized into tables of rows and columns according to strict schema rules, which can make multi-modal data difficult to handle.
- Integrated vector search: GenAI applications may need to run semantic or similarity queries against various types of data, such as free-form text, audio, or images. Vector embeddings in a vector database make semantic or similarity queries possible; they capture the semantic meaning and contextual information of data, making them suitable for a variety of tasks such as text classification, machine translation, and sentiment analysis. The database should provide integrated vector search indexing, removing the complexity of keeping two separate systems synchronized and ensuring a unified query language for developers.
- Scalability: Because GenAI applications are growing in both user count and data size, the database must be able to scale out dynamically to support increasing data volumes and request rates. Native support for scale-out sharding ensures that database limits do not hold back business growth.

The ideal database solution: MongoDB Atlas

MongoDB Atlas is a powerful, multi-purpose platform capable of handling GenAI's unique requirements. MongoDB uses a powerful query API to handle multi-modal data with ease, letting developers accomplish more while writing less code. MongoDB is the most popular document database among developers. Because documents map to objects in object-oriented programming and are more familiar than the endless rows and tables of a relational database, developers can work with documents easily and intuitively. Flexible schema design lets the data model evolve to meet the needs of GenAI use cases, which are inherently multi-modal. By leveraging sharding, Atlas can scale out to support the rapid growth in data and requests generated by GenAI-powered applications. MongoDB Atlas Vector Search natively embeds vector search indexing, so there is no need to maintain two systems. Atlas keeps Vector Search indexes continuously up to date with the source data. Developers can use a single endpoint and query language to write queries that combine regular database query filters with vector search filters. This removes friction and provides an environment where developers can rapidly prototype and deliver GenAI solutions.

Conclusion

GenAI will soon transform industries and deliver innovative solutions across them. GenAI applications built on the right database solution will succeed, providing accurate, context-aware, dynamic, data-driven user experiences that meet the demands of today's rapidly changing digital environment. With MongoDB Atlas, companies can achieve agility, productivity, and growth, securing a competitive edge in the fast-moving world of generative AI. To learn more about how Atlas helps companies integrate and operationalize GenAI and LLM data, download MongoDB's white paper, Embedding Generative AI and Advanced Search into your Apps with MongoDB. If you're interested in leveraging generative AI at your company, contact us today and we'll show you how MongoDB can support your digital transformation.

December 29, 2023

MongoDB Design Reviews Help Customers Achieve Transformative Results

The pressure to deliver flawless software can weigh heavily on developers' minds and cause teams to second-guess their processes. While no amount of preparation can guarantee success, we've found that a design review conducted by members of the MongoDB Developer Relations team can go a long way toward ensuring best practices have been followed and that optimizations are in place to help the team deliver confidently. Design reviews are hour-long sessions where we partner with our customers to help them fine-tune their data models for specific projects or use cases. They give our customers a jump start in the early stages of application design, when the development team is new to MongoDB and trying to understand how best to model their data to achieve their goals. A design review is a valuable enablement session that leverages the development team’s own workload as a case study to illustrate performant and efficient MongoDB design. We also help customers explore the art of the possible and put them on the right path toward achieving their desired outcomes. When participants leave these sessions, they carry the knowledge and confidence to evolve their designs independently. The underlying principle that characterizes these reviews is the domain-driven design ethos, an indispensable concept in software engineering. Design isn't merely a box to tick; it's a daily routine for developers. Design reviews are more than just academic exercises; they hold tangible goals. A primary aim is to enable and educate developers on a global scale, transitioning them away from legacy systems like Oracle. It's about supporting developers, helping them overcome obstacles, and imparting critical education and training. Mastery of the tools is essential, and our sessions delve deep into addressing access patterns and optimizing schema for performance. At its core, a design review is a catalyst for transformation. It's a collaborative endeavor, merging expertise and fostering an environment where innovation thrives. It's not just about reviewing: when our guidance and expertise are combined with developer innovation and talent, the journey from envisioning to implementing a robust data model becomes a shared success. During the session, our experts look at the workload's data-related functional requirements — like data entities and, in particular, reads and writes — along with non-functional requirements like growth rates, performance, and scalability. With these insights in hand, we can recommend target document schemas that help developers achieve the goals they established before committing their first lines of code. A properly designed document schema is fundamental for performant and cost-efficient operations. Getting schema wrong is often the number one reason why projects fail. Design reviews help customers avoid lost time and effort due to poor schemas.

Design reviews in practice

Not long ago, we were approached by a customer in financial services who wanted us to conduct a design review for an application they were building in MongoDB Atlas. The application was designed to give regional account managers a comprehensive view of aggregated performance data. Specifically, it aimed to provide insights into individual stock performance within a customer's portfolio across a specified time frame within a designated region. When we talked to them, the customer highlighted an issue with their aggregation pipeline, which was taking longer than expected — between 20 and 40 seconds — to complete.
Their SLA demanded a response time of under two seconds. Most design reviews involve a couple of steps to assess and diagnose the problem. The first involves assessing the workload. During this step, the things we look at include:

- The number of collections
- The documents in those collections
- How many records the documents contain
- How frequently data is being written or updated in the collections
- What hours of the day see the most activity
- How much storage is being consumed
- Whether and how old data is being purged from collections
- The cluster size the customer is running in MongoDB

Once we performed this assessment for our finserv customer, we had a better understanding of the nature and scale of the workload. The next step was examining the structure of the aggregation pipeline. What we found was that the way data was being collected had a few unnecessary steps, such as breaking down the data and then reassembling it through various $unwind and $group stages. The MongoDB DevRel experts suggested using arrays to reduce the number of steps involved to just two: first, finding the right data, and then looking up the necessary information. Eliminating the $group stage reduced the response time to 19 seconds — a significant improvement but still short of the target. In the next step of the design review, the MongoDB DevRel team looked to determine which schema design patterns could be applied to optimize the pipeline performance. In this particular case, there was a high volume of stock activity documents being written to the database every minute, but users were querying only a limited number of times per day. With this in mind, our DevRel team decided to apply the computed design pattern (a sketch of the idea follows this article). The computed pattern is ideal when you have data that needs to be computed repeatedly in an application. By pre-calculating and saving commonly requested data, it avoids having to do the same calculation each time the data is requested. With our finserv customer, we were able to pre-calculate the trading volume and the opening, closing, high, and low prices for each stock. These values were then stored in a new collection that the $lookup pipeline could access. This resulted in a response time of 1,800 ms — below our two-second target SLA — but our DevRel team wasn't finished. They performed additional optimizations, including using the extended reference pattern to embed region data in the pre-computed stock activity so that all the related data can be retrieved with a single query, avoiding a $lookup-based join. After the team finished their optimizations, the final test execution of the pipeline resulted in a response time of 377 ms — a 60x improvement in the performance of the aggregation pipeline and more than four times faster than the application's target response time. Read the complete story, including a step-by-step breakdown with code examples of how we helped one of our financial services customers achieve a 60x performance improvement. If you'd like to learn more about MongoDB data modeling and aggregation pipelines, we recommend the following resources:

- Daniel Coupal and Ken Alger’s excellent series of blog posts on MongoDB schema patterns
- Daniel Coupal and Lauren Schaefer’s equally excellent series of blog posts on MongoDB anti-patterns
- Paul Done’s ebook, Practical MongoDB Aggregations
- The MongoDB University course "M320 - MongoDB Data Modeling"

If you're interested in a design review, please contact your account representative.
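To make the computed pattern concrete, here is a minimal sketch in Python of the kind of pre-aggregation described above. The connection string, collection names, and fields are illustrative assumptions, not the customer's actual schema:

```python
from pymongo import MongoClient

# Hypothetical Atlas connection string; replace with your own.
db = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")["market_demo"]

pipeline = [
    {"$group": {
        "_id": {"symbol": "$symbol",
                "day": {"$dateTrunc": {"date": "$ts", "unit": "day"}}},
        "volume": {"$sum": "$qty"},
        "open":  {"$first": "$price"},  # assumes documents arrive in time order;
        "close": {"$last": "$price"},   # otherwise add a $sort stage first
        "high":  {"$max": "$price"},
        "low":   {"$min": "$price"},
    }},
    # $merge upserts the computed values into a summary collection that the
    # application's $lookup (or a direct find) can read in milliseconds.
    {"$merge": {"into": "stock_daily_summary", "whenMatched": "replace"}},
]
db.stock_activity.aggregate(pipeline)
```

Run on a schedule (say, once per minute), a pipeline like this keeps the summary collection fresh while the read path stays a cheap lookup instead of an on-the-fly recomputation.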

December 21, 2023

Unleashing the Power of MongoDB Atlas and Amazon Web Services (AWS) for Innovative Applications

When you use MongoDB Atlas on AWS, you can focus on driving innovation and business value instead of managing infrastructure. The combination of MongoDB Atlas, the premier developer data platform, and AWS, the largest global public cloud provider, empowers organizations to create scalable and intelligent applications while streamlining their data infrastructure management. With MongoDB Atlas and AWS, building GenAI-powered applications is far simpler. MongoDB Vector Search enables developers to build intelligent applications powered by semantic search and generative AI over any type of data. Organizations can use their proprietary application data and vector embeddings to enhance foundation models like large language models (LLMs) via retrieval-augmented generation (RAG). This approach reduces hallucinations and delivers personalized user experiences while scaling applications seamlessly to meet evolving demands and maintaining top-tier security standards.

MongoDB real-world use cases

MongoDB helped Forbes accelerate provisioning, maintenance, and disaster-recovery times. Plus, the flexible data structures of MongoDB's document data model allow for faster development and innovation. In another example, a popular convenience store chain reported 99.995% uptime, freeing up its engineers and allowing them to focus on building innovative solutions thanks to Atlas Device Sync. Working with MongoDB helped functional food company MuscleChef transition from a food and beverage business with a website to a data-driven company that leverages customer insights to continuously improve and scale user experience, new product development, operations and logistics, marketing, and communications. Since working with MongoDB, repeat customer orders have surged 49%, purchase frequency has seen a double-digit increase, and average order value is 50% higher than that of its largest competitors. Thousands of customers have been successful running MongoDB Atlas on the robust infrastructure offered by AWS. No-code enterprise application development platform Unqork helps businesses build apps rapidly without writing a line of code. Using MongoDB Atlas, the platform ingests data from multiple sources at scale and pushes it to applications and third-party services. Volvo Connect enables drivers and fleet managers to track trucks, activities, and even insights using a single administrative portal. The versatility and performance of Atlas combined with the AWS global cloud infrastructure help the business connect critical aspects of its operations in completely new ways. Verizon also opted to run Atlas on AWS to unlock the full power of its 5G mobile technology by moving compute elements to the network edge, making the user experience faster.

A unified approach to data handling

The Atlas developer data platform integrates all of the data services you need to build modern applications that are highly available, performant at global scale, and compliant with the most demanding security and privacy standards, within a unified developer experience. With MongoDB Atlas running on AWS Global Cloud Infrastructure, organizations can leverage a single platform to store, manage, and process data at scale, allowing them to concentrate on building intelligent applications and driving business value. Atlas handles transactional data, app-driven analytics, full-text search, generative AI and vector search workloads, stream data processing, and more, all while reducing data infrastructure sprawl and complexity.
MongoDB Atlas is available in 27 AWS regions. This allows organizations to deliver fast and consistent user experiences in any region and to replicate data across multiple regions to reach end users globally with high performance and low latency. Additionally, the ability to store data in specific zones helps ensure compliance with data sovereignty requirements. Security is paramount for both MongoDB Atlas and AWS. MongoDB Atlas is secure by default, leveraging built-in security features across your entire deployment. Atlas helps organizations meet FedRAMP requirements and comply with regulations such as HIPAA, GDPR, and PCI DSS. It offers robust security measures, like our groundbreaking Queryable Encryption, which enables developers to run expressive queries on encrypted data. MongoDB Atlas also enhances developer productivity with its fully managed developer data platform on AWS. It offers a unified interface and API for all data and application services, seamlessly integrating into development and deployment workflows. MongoDB Atlas also integrates with Amazon CodeWhisperer. This powerful combination accelerates developer innovation for a seamless coding experience, improved efficiency, and exceptional business growth.

Conclusion

MongoDB Atlas and AWS have worked together for almost a decade to offer a powerful solution for organizations looking to innovate and build intelligent applications. By simplifying data management, enhancing security, and providing a unified developer experience, they ensure that organizations can focus on what truly matters: driving innovation and delivering exceptional user experiences. If you're ready to get started, MongoDB Atlas is available in the AWS Marketplace, and you have the option to start with a free tier. Get started with MongoDB Atlas on AWS today.

November 14, 2023

Apono Streamlines Data Access with MongoDB Atlas

In today's world of ever-evolving cloud technology, many organizations struggle to effectively manage data access. From companies that have no access policies in place and allow anyone to access any data, to those whose existing solution is only on-premises, there's a pressing need for cloud-based access management. Apono is an easy-to-use platform for centralized access management, removing the trouble of having to depend on a single person to control access to data. Apono brings reliable access management to the cloud, providing organizations with the security they need to protect their valuable information. And, as a member of the MongoDB for Startups program, Apono is accelerating its evolution as it seeks to expand its capabilities and its offering. MongoDB for Startups offers free MongoDB Atlas credits, one-on-one technical advice, co-marketing opportunities, and access to our vast partner network.

Access that's as granular as you need it

As organizations work to find the right balance of granular data access, they've often relied on a combination of workflow builders to make it happen. The way this often plays out is that just one person becomes the de facto expert in managing the system, leaving everyone else in the dark. And when they're gone, so is the expertise for managing ongoing access. Apono is a go-to solution for securely managing access to the most confidential and sensitive cloud resources businesses possess, from production environments to applications. It simplifies database access management across all three major cloud providers. Many database access management solutions help with cluster access management, self-hosted databases, or cloud databases — but rarely all three. Apono enables organizations to manage access to database solutions whether they are self-hosted or in the cloud. Apono enables highly granular permissions, going beyond granting access to a cluster: it allows you to manage access to individual databases, and in MongoDB Atlas it goes as far as allowing you to manage access to individual collections (a sketch of this kind of collection-level control follows below). Apono is unique in its ability to offer that level of granular access management.

Simplified and streamlined user experience

From restricting read and write access to granting temporary permissions, Apono makes it easy for administrators to manage the entire process with a few clicks. According to the company's own internal data, about 80% of administrators are able to create access flows without any help in under two minutes. It's a very intuitive solution that also gives you full visibility into who is accessing or requesting access to resources, and for how long. Administrators can choose how they want to interact with the Apono UX: the intuitive administrator portal, the command line interface (CLI), Terraform, or the Apono API. From an end-user standpoint, Apono supports Slack, Teams, the CLI, and a web portal with time-saving administrative features like request-again and favorites. Additional time-savers include the ability to automate much of the process of granting permissions. Surprisingly, many organizations still handle permissions on an ad hoc basis through informal, one-off requests over text or email. Apono enables administrators to automate access flows, which not only saves time but is also more secure because it reduces the likelihood that someone will assign the wrong permission to a person or group by mistake.
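For context, here is a minimal sketch of the native MongoDB primitive behind that kind of collection-level control: a custom role scoped to a single collection. The example assumes a self-managed deployment where you hold admin rights (Atlas exposes equivalent custom roles through its UI and Admin API, and Apono automates policies like this); the role, user, and namespace are hypothetical:

```python
from pymongo import MongoClient

# Assumes a self-managed deployment; adjust the URI for your environment.
admin_db = MongoClient("mongodb://localhost:27017")["admin"]

# A role that can only read one collection: sales.reports.
admin_db.command(
    "createRole", "reports_reader",
    privileges=[{
        "resource": {"db": "sales", "collection": "reports"},
        "actions": ["find"],  # read-only: no insert, update, or remove
    }],
    roles=[],
)

# Grant it to an existing user; they can read sales.reports and nothing else.
admin_db.command("grantRolesToUser", "analyst",
                 roles=[{"role": "reports_reader", "db": "admin"}])
```

Tools like Apono layer automation, approval workflows, and time-boxed grants on top of primitives like these, so least-privilege access doesn't depend on one administrator running commands by hand.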
Apono also makes it easy to conduct access reviews, which are often required for regulatory purposes. These reviews can be scheduled and automated so that reports are automatically shared with the stakeholders who need them.

The security perimeter in the age of the cloud

Back when most systems were primarily on-prem, it was critical to set up a security perimeter that limited access to anything behind the network firewall. Today, with remote work, cloud architectures, and the proliferation of edge devices, there is no longer one single firewall. Rather, identity has become the new security perimeter. "People work from anywhere, any IP, any device, even their phones. So it's becoming increasingly important to make sure that users have just the right amount of privileges," says Sharon Kisluk, Lead Product Manager at Apono. "If I give someone standing admin access to a cluster, what happens if they destroy the entire cluster by accident?" To prevent data loss due to human error or incorrect permissions, Apono works under the principle of least privilege, which means that any user or operation is allowed to access only the information and resources necessary for its legitimate purpose. That's why, out of the box, Apono gives you the ability to restrict all access to critical production environments.

Multi-cloud access control

The maturity of today's cloud computing has led a large majority — around 87% — of companies to deploy to multiple cloud environments. Like MongoDB Atlas, Apono is available on all three major cloud platforms: AWS, Google Cloud, and Microsoft Azure. Also like MongoDB Atlas, Apono supports self-hosted Kubernetes. "We realized that people hate working with so many different role-based access control systems," says Kisluk. "Each system has its own user management. If you create policies or permissions in AWS, you have to do the same thing in Google Cloud and Azure if you're multi-cloud, and then you have to do the same thing for the databases." With Apono, you can create access flow bundles, a role abstraction that works across systems. For example, you can create a role called "prod access" that enables you to access production databases and grant permission only to those who require access to those systems. Any system that's tagged as a production system will inherit those permissions, even if it's hosted by a different cloud provider. Using MongoDB Atlas combined with Apono, administrators can establish global access policies and roll them out across the entire distributed system with just a few clicks.

Product roadmap

Apono was recently named to the Gartner Magic Quadrant for Privileged Access Management (PAM). While the recognition was unexpected at Apono, Kisluk says it goes to show how Apono is truly the next thing in cloud PAM. Apono is expanding its cloud PAM offering with more complex access flow scenarios, often referred to as "if this, then that": scenarios that are triggered when certain conditions are met. For example, if there's a production incident, you can grant access automatically for only the duration of the bug fix without submitting a special request.

Get to know Apono

Apono is a self-serve solution, so anyone can sign up with their email, connect to their cloud environment and database, and start using the product. Apono will also be at AWS re:Invent, held in Las Vegas from November 27 to December 1.
Don't forget to visit them and, of course, MongoDB, and find out how these two powerful solutions are simplifying and streamlining privileged access management for developers and systems administrators. Sign up for our MongoDB for Startups program today!

October 30, 2023

Retrieval Augmented Generation (RAG): The Open-Book Test for Gen AI

The release of ChatGPT in November 2022 marked a groundbreaking moment for AI, introducing the world to an entirely new realm of possibilities created by the fusion of generative AI and machine learning foundation models, such as large language models (LLMs). To truly unlock the power of LLMs, organizations need to not only access innovative commercial and open-source models but also feed them vast amounts of quality internal and up-to-date data. By combining a mix of proprietary and public data in the models, organizations can expect more accurate and relevant LLM responses that better mirror what's happening at the moment. The ideal way to do this today is by leveraging retrieval-augmented generation (RAG), a powerful approach in natural language processing (NLP) that combines information retrieval and text generation. Most people by now are familiar with the concept of prompt engineering, which is essentially augmenting prompts to direct the LLM to answer in a certain way. With RAG, you're augmenting prompts with proprietary data to direct the LLM to answer in a certain way based on contextual data. The retrieved information serves as a basis for generating coherent and contextually relevant text. This combination allows AI models to provide more accurate, informative, and context-aware responses to queries or prompts. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Applying retrieval-augmented generation (RAG) in the real world

Let's use a stock quote as an example to illustrate the usefulness of retrieval-augmented generation in a real-world scenario. Since LLMs aren't trained on recent data like stock prices, an LLM asked about them will hallucinate and make up an answer or deflect from answering the question entirely. Using retrieval-augmented generation, you would first fetch the latest news snippets from a database that contains the latest stock news (often using vector embeddings in a vector database or MongoDB Atlas Vector Search). Then, you insert or "augment" these snippets into the LLM prompt. Finally, you instruct the LLM to reference the up-to-date stock news in answering the question (a sketch of this flow follows below). With RAG, because no retraining of the LLM is required, the retrieval is very fast (sub-100 ms latency) and well suited for real-time applications. Another common application of retrieval-augmented generation is in chatbots or question-answering systems. When a user asks a question, the system can use the retrieval mechanism to gather relevant information from a vast dataset, and then generate a natural language response that incorporates the retrieved facts.

RAG vs. fine-tuning

Users will immediately bump up against the limits of GenAI anytime there's a question that requires information outside the LLM's training corpus, resulting in hallucinations, inaccuracies, or deflection. RAG fills in the gaps in knowledge that the LLM wasn't trained on, essentially turning the question-answering task into an "open-book quiz," which is easier and less complex than an open and unbounded question-answering task. Fine-tuning is another way to augment LLMs with custom data, but unlike RAG, it alters the model itself — more like giving it entirely new memories, or a lobotomy. It's also time- and resource-intensive, generally not viable for grounding LLMs in a specific context, and especially unsuitable for highly volatile, time-sensitive information and personal data.
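Here is a hedged sketch of the stock-news RAG flow described above: embed the question, retrieve fresh snippets with Atlas Vector Search, then augment the LLM prompt. The index name, model names, connection string, and fields are assumptions for illustration:

```python
from pymongo import MongoClient
from openai import OpenAI

ai = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
news = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")["finance"]["stock_news"]

question = "Why did ACME stock move today?"
q_vec = ai.embeddings.create(model="text-embedding-ada-002",
                             input=question).data[0].embedding

# Retrieve the three most semantically similar news snippets. Assumes an
# Atlas Vector Search index named "news_vec_idx" on the "embedding" field.
snippets = news.aggregate([
    {"$vectorSearch": {
        "index": "news_vec_idx",
        "path": "embedding",
        "queryVector": q_vec,
        "numCandidates": 100,
        "limit": 3,
    }},
    {"$project": {"_id": 0, "text": 1}},
])
context = "\n".join(doc["text"] for doc in snippets)

# Augment the prompt with the retrieved context; no retraining required.
answer = ai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided news snippets."},
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

Because the only per-question work is an embedding call and an index lookup, the retrieval step stays fast enough for interactive chatbots and question-answering systems.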
Conclusion

Retrieval-augmented generation can improve the quality of generated text by ensuring it's grounded in relevant, contextual, real-world knowledge. It can also help in scenarios where the AI model needs to access information it wasn't trained on, making it particularly useful for tasks that require factual accuracy, such as research, customer support, or content generation. By leveraging RAG with your own proprietary data, you can better serve your current customers and give yourself a significant competitive edge with reliable, relevant, and accurate AI-generated output. To learn more about how Atlas helps organizations integrate and operationalize GenAI and LLM data, download our white paper, Embedding Generative AI and Advanced Search into your Apps with MongoDB. If you're interested in leveraging generative AI at your organization, reach out to us today and find out how we can help with your digital transformation.

October 26, 2023

4 Key Considerations for Unlocking the Power of GenAI

Artificial intelligence is evolving at an unprecedented pace, and generative AI (GenAI) is at the forefront of the revolution. GenAI capabilities are vast, ranging from text generation to music and art creation. But what makes GenAI truly unique is its ability to deeply understand context, producing outputs that closely resemble those of humans. It's not just about conversing with intelligent chatbots. GenAI has the potential to transform industries, providing richer user experiences and unlocking new possibilities. In the coming months and years, we'll witness the emergence of applications that leverage GenAI's power behind the scenes, offering capabilities never before seen. Unlike now-popular chatbots like ChatGPT, users won't necessarily realize that GenAI is working in the background. But behind the scenes, these new applications are combining information retrieval and text generation to deliver truly personalized and contextual user experiences in real time. This process is called retrieval-augmented generation, or RAG for short. So, how does retrieval-augmented generation (RAG) work, and what role do databases play in this process? Let's delve deeper into the world of GenAI and its database requirements. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

The challenge of training AI foundation models

One of the primary challenges with GenAI is the lack of access to private or proprietary data. AI foundation models, of which large language models (LLMs) are a subset, are typically trained on publicly available data but do not have access to confidential or proprietary information. Even if the data were in the public domain, it might be outdated and irrelevant. LLMs also have limitations in recognizing very recent events or knowledge. Furthermore, without proper guidance, LLMs may produce inaccurate information, which is unacceptable in most situations. Databases play a crucial role in addressing these challenges. Instead of sending prompts directly to LLMs, applications can use databases to retrieve relevant data and include it in the prompt as context. For example, a banking application could query the user's transaction data from a legacy database, add it to the prompt, and then send this engineered prompt to the LLM. This approach ensures that the LLM generates accurate and up-to-date responses, eliminating the issues of missing data, stale data, and inaccuracies.

Top 4 database considerations for GenAI applications

It won't be easy for businesses to achieve real competitive advantage with GenAI when everyone has access to the same tools and knowledge base. Rather, the key to differentiation will come from layering your own unique proprietary data on top of generative AI powered by foundation models and LLMs. There are four key considerations organizations should focus on when choosing a database to leverage the full potential of GenAI-powered applications:

- Queryability: The database needs to be able to support rich, expressive queries and secondary indexes to enable real-time, context-aware user experiences. This capability ensures data can be retrieved in milliseconds, regardless of the complexity of the query or the size of data stored in the database.
- Flexible data model: GenAI applications often require different types and formats of data, referred to as multi-modal data. To accommodate these changing data sets, databases should have a flexible data model that allows for easy onboarding of new data without schema changes, code modifications, or version releases. Multi-modal data can be challenging for relational databases because they're designed to handle structured data, where information is organized into tables with rows and columns, with strict schema rules.
- Integrated vector search: GenAI applications may need to perform semantic or similarity queries on different types of data, such as free-form text, audio, or images. Vector embeddings in a vector database enable semantic or similarity queries. Vector embeddings capture the semantic meaning and contextual information of data, making them suitable for various tasks like text classification, machine translation, and sentiment analysis. Databases should provide integrated vector search indexing to eliminate the complexity of keeping two separate systems synchronized and to ensure a unified query language for developers.
- Scalability: As GenAI applications grow in terms of user base and data size, databases must be able to scale out dynamically to support increasing data volumes and request rates. Native support for scale-out sharding ensures that database limitations aren't blockers to business growth.

The ideal database solution: MongoDB Atlas

MongoDB Atlas is a powerful and versatile platform for handling the unique demands of GenAI. MongoDB uses a powerful query API that makes it easy to work with multi-modal data, enabling developers to deliver more with less code. MongoDB is the most popular document database as rated by developers. Working with documents is easy and intuitive for developers because documents map to objects in object-oriented programming, which are more familiar than the endless rows and tables of relational databases. Flexible schema design allows the data model to evolve to meet the needs of GenAI use cases, which are inherently multi-modal. By using sharding, Atlas scales out to support the large increases in data volume and requests that come with GenAI-powered applications. MongoDB Atlas Vector Search embeds vector search indexing natively, so there's no need to maintain two different systems. Atlas constantly keeps Vector Search indexes up to date with the source data. Developers can use a single endpoint and query language to construct queries that combine regular database query filters and vector search filters (see the sketch below). This removes friction and provides an environment in which developers can prototype and deliver GenAI solutions rapidly.

Conclusion

GenAI is poised to reshape industries and provide innovative solutions across sectors. With the right database solution, GenAI applications can thrive, delivering accurate, context-aware, and dynamic data-driven user experiences that meet the growing demands of today's fast-paced digital landscape. With MongoDB Atlas, organizations can unlock agility, productivity, and growth, providing a competitive edge in the rapidly evolving world of generative AI. To learn more about how Atlas helps organizations integrate and operationalize GenAI and LLM data, download our white paper, Embedding Generative AI and Advanced Search into your Apps with MongoDB. If you're interested in leveraging generative AI at your organization, reach out to us today and find out how we can help your digital transformation. Head over to our quick-start guide to get started with Atlas Vector Search today.
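As a minimal sketch of that single-query-language point, the aggregation below combines a vector similarity search with an ordinary database filter in one statement. The index name, fields, and helper are illustrative assumptions; with Atlas Vector Search, fields used in the filter must also be indexed as filter fields:

```python
from pymongo import MongoClient

def get_embedding(text):
    """Hypothetical helper; in practice, call your embedding model here."""
    return [0.0] * 1536

# Hypothetical Atlas connection string; replace with your own.
products = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")["shop"]["products"]

results = products.aggregate([
    {"$vectorSearch": {
        "index": "product_vec_idx",          # assumed vector search index
        "path": "description_embedding",
        "queryVector": get_embedding("lightweight hiking tent"),
        "numCandidates": 200,
        "limit": 10,
        # A regular database filter, applied alongside the similarity search.
        "filter": {"$and": [{"category": {"$eq": "outdoor"}},
                            {"in_stock": {"$eq": True}}]},
    }},
    {"$project": {"name": 1, "price": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc["name"], doc["price"], doc["score"])
```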

October 26, 2023

New Regulations Set to Snare Data-Handlers into Compliance

Now that the General Data Protection Regulation (GDPR) has become more firmly entrenched in the EU, several U.S. states are introducing similar data governance measures that will impose extra obligations on businesses that handle consumer data in those jurisdictions. California, Colorado, Connecticut, Utah, and Virginia all have new or amended consumer data privacy laws that have already gone into effect or are expected to by year's end.

Control vs. controllers

While most data privacy laws focus on giving consumers greater insight into and control over their personal data, they also require data controllers and processors to protect the security and integrity of the data they handle for consumers. All five new state privacy laws require data controllers and processors to protect the information they process with reasonable data security measures. What constitutes "reasonable" remains up for debate, but recent trends point toward an information security program that goes beyond current requirements for safeguards and advocates a more strategic approach based on risk assessment. Sectors like financial services and healthcare have long been accustomed to mandatory data security measures, since both industries are subject to regulatory regimes — the Gramm-Leach-Bliley Act (GLBA), the Financial Industry Regulatory Authority (FINRA), and the Payment Card Industry (PCI) standards for financial institutions, and the Health Insurance Portability and Accountability Act (HIPAA) for healthcare organizations. But gradually, the expansion of existing regulations and the introduction of new privacy laws at the state level are snaring more businesses that seek to operate in those jurisdictions. First, in 2013, the Omnibus Rule expanded the definition of a “business associate” to include all entities that create, receive, maintain, or transmit patient data on behalf of a covered entity as defined by HIPAA. So, businesses that were not previously subject to HIPAA became bound by its requirements for safeguarding protected health information (PHI) if they had any in their systems or committed any transactions involving PHI. The Omnibus Rule was an early indicator that regulatory bodies would cast a wider net to include not just traditional industry organizations but also data handlers sitting squarely in the middle of the data supply chain. Now, with more state consumer data privacy laws rolling out, more businesses will be required to implement reasonable security safeguards to protect any sensitive data anywhere in their systems.

What's reasonable security?

The National Institute of Standards and Technology (NIST) Cybersecurity Framework helps organizations better understand, manage, and mitigate cybersecurity risks. It encourages adaptability in the face of an evolving threat landscape and emphasizes the importance of data resilience measures to ensure the protection of critical assets and information. It's widely adopted across several industries for strengthening cybersecurity practices. The Center for Internet Security (CIS) also publishes examples of security controls that some state attorneys general cite as meeting a minimum level of information security that data handlers should meet. One of the universal threads running through most cybersecurity frameworks, like those from NIST and CIS, is the importance of data resilience.
Data resilience is crucial because it ensures that important personal data, like patient health records and bank customers' financial records, remains available and intact even in the face of unexpected events such as hardware failures, cyberattacks, or natural disasters. It safeguards business continuity, preserves information integrity, and maintains trust by reducing the risk of data loss or downtime. Aside from the reputational harm that comes from being the victim of a cybersecurity event like a ransomware attack or data breach, there's an increasing risk that affected businesses will be subject to regulatory enforcement in the form of fines for running afoul of new restrictions.

Security features and controls in MongoDB

At MongoDB, we are intimately familiar with the technical safeguards and regulatory requirements that relate to sensitive data. MongoDB Atlas is designed for the needs of businesses in regulated industries. Atlas is a global, multi-cloud application data platform built around a resilient, performant, and scalable distributed database designed to ensure important data remains intact and available. Atlas is architected to provide automated database resilience and mitigate the downtime risks associated with hardware failures, unintended actions, and targeted attacks. Atlas clusters offer continuous cloud backups and multi-region clusters for database redundancy, as well as multi-cloud clusters for cross-cloud database resilience. Atlas automatically distributes data across clouds based on how you've configured it, making managing multi-cloud clusters extremely easy. Multi-cloud cluster deployments are particularly relevant for organizations that must comply with data sovereignty regulations but have limited deployment options due to sparse regional coverage from their primary cloud provider. With MongoDB Atlas, administrators can encrypt MongoDB data in transit over the network and at rest in permanent storage and backups. For data in transit, support for TLS allows clients to connect to MongoDB over an encrypted channel. Data at rest is automatically encrypted through transparent disk encryption at all three major cloud providers: AWS, Google Cloud, and Microsoft Azure. Additionally, MongoDB's in-use encryption technologies, like client-side Field-Level Encryption (FLE) and Queryable Encryption, enable administrators to selectively encrypt sensitive fields, each optionally secured with its own key (a sketch of explicit field-level encryption follows below). All encrypted fields on the server — stored in memory, in system logs, at rest, and in backups — are rendered as ciphertext, making them unreadable to any party; they are decrypted only on the client side using the encryption keys. MongoDB also offers a complete set of administrative features that enable organizations to create, deploy, and manage policies for data access according to their own internal requirements, including database authentication, multi-factor authentication (MFA), and role-based access control (RBAC). Of course, no business wants to lose data, and every business would prefer to avoid the reputational harm that comes from data breaches or having data held for ransom. With the potential for hefty fines for running afoul of new privacy legislation, businesses have even more reasons to implement protective measures that ensure the resilience of their systems.
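As a rough sketch of client-side field-level encryption in practice, the example below uses PyMongo's explicit encryption helpers with a local key provider (it requires the pymongocrypt library, and production deployments would use a cloud KMS such as AWS KMS, Azure Key Vault, or GCP KMS). The connection string, field names, and values are illustrative:

```python
import os
from bson.binary import STANDARD
from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import Algorithm, ClientEncryption

client = MongoClient("mongodb://localhost:27017")  # illustrative deployment

# A local 96-byte master key for demo purposes only; use a cloud KMS in
# production so keys survive process restarts and are centrally managed.
kms_providers = {"local": {"key": os.urandom(96)}}

client_encryption = ClientEncryption(
    kms_providers, "encryption.__keyVault", client,
    CodecOptions(uuid_representation=STANDARD),
)
key_id = client_encryption.create_data_key("local")

# Only ciphertext ever reaches the server, its logs, or its backups.
encrypted_ssn = client_encryption.encrypt(
    "123-45-6789",
    Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
    key_id=key_id,
)
client["hr"]["employees"].insert_one({"name": "Jane Doe", "ssn": encrypted_ssn})

# Decryption happens on the client side, with the keys, never on the server.
doc = client["hr"]["employees"].find_one({"name": "Jane Doe"})
print(client_encryption.decrypt(doc["ssn"]))
```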
As regulatory creep continues to expand across the data landscape, businesses must take it upon themselves to ensure data integrity and resilience are high priorities across the organization. For more information on data resilience features in MongoDB Atlas, download our Data Resilience Strategy with MongoDB Atlas whitepaper.

October 24, 2023

Four Key Considerations for Harnessing Powerful Generative AI in Your Applications

Artificial intelligence is advancing at an unprecedented pace, and generative AI is riding the crest of that wave. Generative AI capabilities span a remarkably wide range of applications, from text generation to music and art creation. What sets generative AI apart is its ability to deeply interpret text and produce output that reads as though a human wrote it. This goes well beyond smart chatbots: generative AI has the potential to shake up entire industries, deliver richer user experiences, and open up a wealth of possibilities.

Over the coming months and years, we'll see more and more applications use generative AI to deliver capabilities never seen before. Unlike today's popular chatbot ChatGPT, users may not even realize they're interacting with generative AI. Behind these new applications, generative AI quietly performs information retrieval and text generation to deliver genuinely tailored, context-aware user experiences in real time, a process known as retrieval-augmented generation (RAG). So how does RAG work, and what role does the database play? Join us as we explore the world of generative AI and what it demands of a database. (A minimal sketch of the RAG flow appears at the end of this piece.)

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

The challenge of training AI foundation models

One of the main challenges generative AI faces today is the lack of access to private or proprietary data. AI foundation models, of which large language models (LLMs) are a subset, are typically trained on publicly available data and cannot reach confidential or proprietary information. Even publicly hosted data is often outdated or irrelevant, and LLMs are limited in their awareness of very recent events or knowledge. Moreover, without proper guidance, LLMs can produce incorrect information, which is unacceptable in many scenarios.

Databases play a critical role in addressing these challenges. Rather than sending a prompt straight to the LLM, an application can first query a database, retrieve the relevant data, and insert it into the prompt as context. For example, a financial application can look up a user's transaction data in a conventional database, add it to the prompt, and then send the augmented prompt to the LLM. This approach ensures the LLM generates accurate, up-to-date responses, avoiding the problems caused by missing, stale, or incorrect data.

Four considerations when choosing a database for generative AI applications

With everyone having access to the same tools and knowledge bases, it isn't easy for companies to gain a competitive edge through generative AI alone. The key differentiator is an enterprise's own proprietary data, layered on top of the generative AI capabilities that foundation models and LLMs provide. Organizations that want to fully realize the potential of generative AI-enhanced applications should weigh four considerations when choosing a database:

- Queryability: The database must support rich, expressive queries and secondary indexes to deliver context-aware user experiences in real time. This ensures data can be retrieved in milliseconds, regardless of query complexity or the volume of data stored in the database.

- A flexible data model: Generative AI applications often need data of different types and formats, that is, multimodal data. To accommodate changing datasets, the database needs a flexible data model that makes it easy to onboard new data without schema changes, code modifications, or new releases. Multimodal data is a challenge for relational databases, which are designed for structured data, organizing information into tables of rows and columns under rigid schema rules.

- Integrated vector search: Generative AI applications need to run semantic or similarity queries against different types of data, such as unstructured text, audio, or images. Vector embeddings make this possible: they capture the semantic and contextual information in data, making them suitable for tasks like text classification, machine translation, and sentiment analysis. The database should provide integrated vector search indexing to avoid the complexity of keeping data in sync across two different systems, and to give developers a unified query language.

- Scalability: As a generative AI application's user base and data volume grow, the database must scale to support rising data volumes and request rates. A database that supports scale-out sharding ensures capacity limits never become a stumbling block for the business.

The ideal database solution: MongoDB Atlas

MongoDB Atlas is a powerful and versatile platform for handling the unique demands of generative AI. MongoDB's powerful query API makes it easy to work with multimodal data, with less code for developers to write. MongoDB is the most popular document database among developers; documents are simple and intuitive to work with because they map to objects in object-oriented programming, which is far more familiar than the endless rows and columns of relational tables. Flexible schema design lets the data model evolve continuously to meet the inherently multimodal needs of generative AI use cases. Atlas scales out through sharding to support the surging data and request volumes of generative AI applications.

MongoDB Atlas Vector Search embeds vector search indexing directly, so there's no need to maintain two separate systems. Atlas keeps Vector Search indexes up to date with the source data. Developers can use a single endpoint and query language to build queries that combine regular database query filters with vector search filters. This avoids friction and gives developers an environment for rapidly prototyping and shipping generative AI solutions.

Conclusion

Generative AI is poised to reshape industries and deliver innovative solutions across business functions. With the right database solution, generative AI applications can thrive, delivering the accurate, context-aware, dynamic data-driven user experiences that today's fast-changing digital landscape demands. With MongoDB Atlas, organizations gain the agility, productivity, and growth they need to stay competitive in the rapidly evolving world of generative AI. To learn more about how Atlas helps organizations integrate and operationalize generative AI and LLM data, download our white paper, Embedding Generative AI and Advanced Search into your Apps with MongoDB. If you're interested in leveraging generative AI at your organization, reach out to us today and find out how we can help your digital transformation.
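As promised above, here is a minimal sketch of the RAG flow with Atlas Vector Search in PyMongo. The URI, database, collection, and index names are placeholders, and it assumes an Atlas Vector Search index already exists on the `embedding` field.

```python
# Minimal sketch: retrieve context with $vectorSearch, then fold it into the
# prompt before calling an LLM (retrieval-augmented generation).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
articles = client.support.articles

def retrieve_context(query_embedding: list[float], k: int = 3) -> list[str]:
    """Return the text of the k articles semantically closest to the query."""
    pipeline = [
        {"$vectorSearch": {
            "index": "article_embeddings",  # hypothetical index name
            "path": "embedding",
            "queryVector": query_embedding,
            "numCandidates": 100,
            "limit": k,
        }},
        {"$project": {"_id": 0, "text": 1}},
    ]
    return [doc["text"] for doc in articles.aggregate(pipeline)]

def build_prompt(question: str, query_embedding: list[float]) -> str:
    """Augment the user's question with retrieved context for the LLM."""
    context = "\n---\n".join(retrieve_context(query_embedding))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The query embedding comes from whichever embedding model you use; the
# augmented prompt returned by build_prompt() is then sent to your LLM.
```

Because retrieval and filtering happen in the same aggregation pipeline, ordinary query predicates can be combined with the vector stage without syncing data to a separate vector store.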

October 23, 2023

How to Avoid GenAI Sprawl and Complexity

There's no doubt that generative AI and large language models (LLMs) are disruptive forces that will continue to transform our industry and economy in profound ways. But there's also something very familiar about the path organizations are taking to tap into GenAI capabilities. It's the same journey that happens anytime there's a need for data that serves a very specific and narrow purpose. We've seen it with search, where bolt-on full-text search engines have proliferated, resulting in search-specific domains and the expertise required to deploy and maintain them. We've also seen it with time-series data, where the need to deliver real-time experiences while solving for intermittent connectivity has produced a proliferation of edge-specific solutions for handling time-stamped data. And now we're seeing it with GenAI and LLMs, where niche solutions are emerging to handle the volume and velocity of all the new data organizations are creating. The challenge for IT decision-makers is finding a way to capitalize on innovative new ways of using and working with data while minimizing the extra expertise, storage, and computing resources required to deploy and maintain purpose-built solutions.

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Purpose-built cost and complexity

The process of onboarding search databases illustrates the downstream effects that adding a purpose-built database has on developers. To leverage advanced search features like fuzzy search and synonyms, organizations typically onboard a search-specific solution such as Solr, Elasticsearch, Algolia, or OpenSearch. A dedicated search database is yet another system that requires already scarce IT resources to deploy, manage, and maintain. Niche or purpose-built solutions like these often require technology veterans who can expertly deploy and optimize them. More often than not, it falls to one person or a small team to figure out how to stand up, configure, and optimize the new search environment as they go along.

Time-series data is another example. The effort it takes to write sync code that resolves conflicts between the mobile device and the back end consumes a significant amount of developer time. On top of that, the work is non-differentiating, since users expect to see up-to-date information and not lose data to poorly written conflict-resolution code. So developers end up spending precious time on work that is neither of strategic importance to the business nor a differentiator of their product or service from the competition.

The arrival and proliferation of GenAI and LLMs are likely to accelerate new IT investments as organizations move to capitalize on this powerful, game-changing technology. Many of these investments will take the form of the dedicated technology resources and developer talent needed to operationalize it. But the last thing tech buyers and developers need is another niche solution that pulls resources away from other strategically important initiatives.

Documents to the rescue

Leveraging GenAI and LLMs to gain new insights, create new user experiences, and drive new sources of revenue doesn't have to mean additional architectural sprawl and complexity. Drawing on the powerful document data model and an intuitive API, the MongoDB Atlas developer data platform allows developers to move swiftly and take advantage of fast-paced breakthroughs in GenAI without having to learn new tools or proprietary services, as the sketch below illustrates.
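Here is a minimal sketch of that idea with PyMongo: one collection, one driver, one query language for flexible documents, CRUD, and keyword search. The connection string, namespace, and sample documents are hypothetical, and the `$search` stage assumes an Atlas Search index named "default" exists on the collection.

```python
# Minimal sketch: the document model and unified Query API in one place.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
products = client.shop.products

# Documents with different shapes, including one carrying a vector embedding,
# live side by side in one collection; adding a field needs no schema migration.
products.insert_many([
    {"name": "espresso machine", "price": 499, "tags": ["kitchen"]},
    {"name": "burr grinder", "price": 129, "embedding": [0.12, -0.4, 0.88]},
])

# Ordinary CRUD through the same driver...
for doc in products.find({"price": {"$lt": 200}}):
    print(doc["name"])

# ...and full-text keyword search through the same query language.
hits = products.aggregate([
    {"$search": {"index": "default",
                 "text": {"query": "grinder", "path": "name"}}},
    {"$limit": 5},
])
```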
Documents are the perfect vehicle for GenAI feature development because they provide an intuitive, easy-to-understand mapping of data into code objects. Plus, the flexibility they provide lets developers adapt to ever-changing application requirements, whether it's the addition of new types of data or the implementation of new features. The huge diversity of typical application data, and even vector embeddings with thousands of dimensions, can all be handled with documents. The MongoDB Query API makes developers' lives easier, allowing them to use one unified, consistent system to perform CRUD operations while also taking advantage of more sophisticated features such as keyword and vector search, analytics, and stream processing, all without switching between different query languages and drivers, helping keep your tech stack agile and streamlined.

Making the most out of GenAI

AI-driven innovation is pushing the envelope of what's possible in the user experience, but to deliver real transformative business value, it must be seamlessly integrated into a comprehensive, feature-rich application that moves the needle for companies in meaningful ways. MongoDB Atlas takes the complexity out of AI-driven projects. Our intuitive developer data platform streamlines the process of bringing new experiences to market quickly and cost-effectively. With Atlas, you can reduce the risk and complexity associated with operational and security models, data wrangling, integration work, and data duplication.

To find out more about how Atlas helps organizations integrate and operationalize GenAI and LLM data, download our white paper, Embedding Generative AI and Advanced Search into your Apps with MongoDB. If you're interested in leveraging generative AI at your organization, reach out to us today and find out how we can help your digital transformation.

October 12, 2023