Eric Holzhauer


Announcing MongoDB Relational Migrator

We’re thrilled to announce a new tool: MongoDB Relational Migrator. Relational Migrator simplifies the process of moving workloads from relational databases to MongoDB. We’ve heard it from more of our customers than we can count: organizations want to replatform existing applications from relational databases to MongoDB. MongoDB is more intuitive, more flexible, and more scalable than relational databases. Customers tell us that they need to move away from a relational backend in order to build new functionality into existing applications with increased agility, to make new and better use of enterprise data, or to scale existing services to volumes or usage patterns that they were never designed to handle.

While some customers have successfully migrated some of their relational workloads to MongoDB, many have struggled with how to approach this challenge. Requirements vary. Can we decommission the old database, or does it need to stay running? Is this a wholesale replatforming, or are we carving out pieces of functionality to move to MongoDB? Some customers end up using a variety of ETL, CDC, message queue, streaming, pub/sub, or other technologies to move data into MongoDB, but others have decided it’s just too difficult.

It’s also important to think carefully about data modeling as part of a migration. Though it’s possible to naively move a relational schema into MongoDB without any changes, that won’t deliver many of MongoDB’s benefits. A better practice is to design a new, more denormalized MongoDB schema, and potentially to take the opportunity to revise the architecture of the application as well.

We want to make this process easier, which is why we’re developing MongoDB Relational Migrator. Relational Migrator streamlines the process of moving to MongoDB from a relational database and is compatible with Oracle, Microsoft SQL Server, MySQL, and PostgreSQL.
Migrator connects to a relational database to analyze its existing schema, then helps architects design and map to a new MongoDB schema. When you’re ready, Migrator will perform the data migration from the source RDBMS to MongoDB. Migration can be a one-shot migration if you’re prepared for a hard cutover; soon, we will also support a continuous sync if you need to leave the source system running and continue pushing changes into MongoDB. We know that moving long-running systems to MongoDB still isn’t as simple as pushing a button, which is why Relational Migrator is designed to be used with assistance from our Field Engineering teams. For example, as part of a consulting engagement with MongoDB, a consulting engineer can help you evaluate which applications are the best candidates for migration, design and implement a new MongoDB backend, and execute the migration. Relational Migrator will significantly lower the effort and risk in transforming and replicating your data, leaving more time to focus on other aspects of application modernization. If you’ve been trying to figure out how to get off of a relational database, get in touch to learn more about MongoDB Relational Migrator.
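To make the data-modeling point concrete, here is a minimal sketch of what denormalizing a one-to-many relationship can look like: two normalized tables, customers and orders, become a single collection of customer documents with their orders embedded. This is our own illustration, not Relational Migrator’s actual output; all table, field, and function names are invented.

```javascript
// Rows as they might come out of a relational database
// (hypothetical "customers" and "orders" tables).
const customers = [{ id: 1, name: "Ada" }];
const orders = [
  { id: 10, customerId: 1, total: 25.0 },
  { id: 11, customerId: 1, total: 17.5 },
];

// Build denormalized MongoDB-style documents: embedding replaces the
// foreign-key join, so one read fetches a customer and all their orders.
function toDocuments(customers, orders) {
  return customers.map((c) => ({
    _id: c.id,
    name: c.name,
    orders: orders
      .filter((o) => o.customerId === c.id)
      .map(({ id, total }) => ({ orderId: id, total })),
  }));
}
```

Whether to embed or link each relationship depends on access patterns, which is exactly the kind of schema-design decision the migration process should surface.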

June 7, 2022

Announcing GA of the MongoDB Atlas Operator for Kubernetes

We’re excited to announce the general availability of the Atlas Kubernetes Operator, the best way to use MongoDB with Kubernetes. The Atlas Kubernetes Operator makes it easy to deploy, manage, and access MongoDB Atlas from your preferred Kubernetes distribution. When the operator is installed into your Kubernetes environment, it exposes Kubernetes custom resources to fully manage projects, deployments (clusters and serverless instances), network access (IP Access Lists and Private Endpoints), database users, backup, and more. For a full list of capabilities, check out the Atlas Operator documentation.

The Atlas Operator is built to Kubernetes standards. It’s open source and built with the CNCF Operator Framework, so you can have confidence that it will work with your Kubernetes environment. The Operator supports any Certified Kubernetes Distribution and is OpenShift-certified.

With the Operator, you can easily manage your Atlas resources directly from Kubernetes, using the Kubernetes API. This means no switching between systems: you can manage your containerized applications and the data layer powering them from a single control plane. This also makes it easy to integrate Atlas into your Kubernetes-native CI/CD pipelines, automatically setting up and tearing down infrastructure as part of your deployment process.

Why Kubernetes and MongoDB Atlas?

Atlas is a multi-cloud document database that provides the versatility you need to build sophisticated and resilient applications. It has built-in high availability, is easily scalable, and is flexible enough to support rapid iteration and shipping of new application features. This makes it a great fit for the modern development and deployment practices that containerization and Kubernetes support. It’s also incredibly simple to deploy multi-cloud clusters or move between clouds on Atlas: a good match for the portability that containers provide.
Learn more about the Atlas Operator for Kubernetes or get going right away with the Atlas Operator Quick Start.
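As a sketch of what managing Atlas through custom resources looks like, the following manifest declares a project and a cluster and applies them with kubectl. The resource kinds and field names shown here (AtlasProject, AtlasDeployment, deploymentSpec, and so on) are from our recollection and can vary by operator version, so treat this as illustrative and check the Atlas Operator documentation for the authoritative spec.

```shell
# Illustrative only: kinds and fields may differ by operator version.
# Assumes the Atlas Operator is installed and a Secret with Atlas API
# credentials has already been created in the cluster.
kubectl apply -f - <<'EOF'
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: my-project
spec:
  name: Test Atlas Operator Project
  projectIpAccessList:
    - ipAddress: "192.0.2.15"
      comment: "IP address for Application Server A"
---
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
  name: my-atlas-deployment
spec:
  projectRef:
    name: my-project
  deploymentSpec:
    name: test-deployment
    providerSettings:
      providerName: GCP
      regionName: CENTRAL_US
      instanceSizeName: M10
EOF
```

Because these are ordinary Kubernetes resources, the same manifest can live in version control and be applied by a CI/CD pipeline, which is the "single control plane" workflow described above.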

June 6, 2022

Introducing Pay as You Go MongoDB Atlas on AWS Marketplace

We’re excited to introduce a new way of paying for MongoDB Atlas. AWS customers can now pay Atlas charges via our new AWS Marketplace listing. Through this listing, individual developers can enjoy a simplified payment experience via their AWS accounts, while enterprises now have another way to procure MongoDB in addition to privately negotiated offers, which were already supported via AWS Marketplace.

Previously, customers who wanted to pay via AWS Marketplace had to commit to a certain level of usage upfront. Pay as you go has been available directly in Atlas via credit card, PayPal, and invoice, but not in AWS Marketplace, until today. With this new listing and integration, you can pay via AWS with no upfront commitments. Simply subscribe via AWS Marketplace and start using Atlas. You can get started for free with Atlas’s free-forever tier, then scale as needed. You’ll be charged in AWS only for the resources you use in Atlas, with no payment minimum. Deploy, scale, and tear down resources in Atlas as needed; you’ll pay just for the hours that you’re using them.

Atlas comes with a Basic Support Plan via in-app chat. If you want to upgrade to another Atlas support plan, you can do so in Atlas. Usage and support costs will be billed together to your AWS account daily. If you’re connecting Atlas to applications running in AWS, or integrating with other AWS services, you’ll be able to see all your costs in one place in your AWS account.

To get started with Atlas via AWS Marketplace, visit our Marketplace listing and subscribe using your account. You’ll then be prompted to either sign in to your existing Atlas account or sign up for a new one. Try MongoDB Atlas for free today!

December 15, 2021

Announcing Google Private Service Connect (PSC) Integration for MongoDB Atlas

We’re excited to announce the general availability of Google Cloud Private Service Connect (PSC) as a new network access management option in MongoDB Atlas. Announced alongside the availability of MongoDB 5.1, Google Cloud PSC is GA for use with Atlas. See the documentation for instructions on setting up Google Cloud PSC for Atlas, or read on for more information.

MongoDB Atlas is secure by default. All dedicated Google Cloud clusters on Atlas are deployed in their own VPC. To set up network security controls, Atlas customers already have the options of an IP Access List and VPC Peering. The IP Access List in Atlas is a straightforward and secure connection mechanism, and all traffic is encrypted with end-to-end TLS. But you must be able to provide static public IPs for your application servers to connect to Atlas, and to list those IPs in the Access List. If your applications don’t have static public IPs, or if you have strict requirements on outbound database access via public IPs, this won’t work for you.

The existing solution to this is VPC Peering, which allows you to configure a secure peering connection between your Atlas cluster’s VPC and your own Google Cloud VPC(s). This is easy, but the connections are two-way. Although Atlas never has to initiate connections to your environment, some Atlas users don’t want to use VPC Peering because it extends the perceived network trust boundary. Access Control Lists (ACLs) and IAM Groups can control this access, but they require additional configuration.

MongoDB Atlas and Google Cloud PSC

Now, you can use Google Cloud Private Service Connect to connect a VPC to MongoDB Atlas. Private Service Connect allows you to create private and secure connections from your Google Cloud networks to MongoDB Atlas. It creates service endpoints in your VPCs that provide private connectivity and policy enforcement, allowing you to easily control network security in one place.
This brings two major advantages:

Unidirectional: Connections via PSC use a private IP within the customer’s VPC and are unidirectional. Atlas cannot initiate connections back to the customer's VPC. This means that there is no extension of the perceived network trust boundary.

Transitive: Connections to the PSC private IPs within the customer’s VPC can come transitively from an on-prem data center connected to the PSC-enabled VPC with Cloud VPN. Customers can connect directly from their on-prem data centers to Atlas without using public IP Access Lists.

Google Cloud Private Service Connect offers a one-way network peering service between a Google Cloud VPC and a MongoDB Atlas VPC

Meeting security requirements with Atlas on Google Cloud

Google Cloud PSC adds to the security capabilities that are already available in MongoDB Atlas, like Client-Side Field-Level Encryption, database auditing, BYO key encryption with Google Cloud KMS integration, federated identity, and more. MongoDB Atlas undergoes independent verification of security and compliance controls, so you can be confident in using Atlas on Google Cloud for your most critical workloads.

To learn more about configuring Google PSC with MongoDB Atlas, visit our docs. If you’re already managing your Atlas clusters with our API, you can add a private endpoint with the documentation here. For more information about Google Cloud Private Service Connect, visit the Google Cloud docs or read the Introducing Private Service Connect release announcement. Try MongoDB Atlas for free today!
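For those managing Atlas programmatically, creating the Atlas side of a Private Service Connect setup via the Atlas Administration API looks roughly like the following. The path and request body are based on the v1.0 API as we recall it, and the key and project ID variables are placeholders, so verify against the current API reference before relying on this.

```shell
# Sketch: request a private endpoint service for Google Cloud in an Atlas
# project. Atlas API keys authenticate with HTTP digest authentication.
curl --user "${ATLAS_PUBLIC_KEY}:${ATLAS_PRIVATE_KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/${PROJECT_ID}/privateEndpoint/endpointService" \
  --data '{ "providerName": "GCP", "region": "us-central1" }'
```

Once the endpoint service is ready on the Atlas side, the matching Private Service Connect endpoints are created in your own Google Cloud VPC and registered back with Atlas.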

November 11, 2021

MongoDB Atlas for Government Achieves "FedRAMP In-process"

We are pleased to announce that MongoDB Atlas for Government has achieved the FedRAMP designation of “In-process”. This status reflects MongoDB’s continued progress toward a FedRAMP Authorized modern data platform for the US Government. Earlier this year, MongoDB Atlas for Government achieved the designation of FedRAMP Ready.

MongoDB is widely used across the Federal Government, including the Department of Veterans Affairs, the Department of Health & Human Services (HHS), the General Services Administration, and others. HHS is also sponsoring the FedRAMP authorization process for MongoDB.

What is MongoDB Atlas for Government?

MongoDB Atlas for Government is an independent environment of our flagship cloud product MongoDB Atlas. Atlas for Government has been built for US government needs. It allows federal, state, and local governments as well as educational institutions to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

MongoDB Atlas for Government highlights:

Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.
Atlas for Government clusters can span regions within AWS GovCloud or within AWS.
Atlas core features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.
Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting started and pricing

MongoDB Atlas for Government is available to Government customers or companies that sell to the US Government. You can buy Atlas for Government through AWS GovCloud or the AWS Marketplace. Please fill out this form and a representative will get in touch with you. To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.

September 22, 2021

MongoDB Atlas for Government

We are pleased to announce the general availability of MongoDB Atlas for Government, an independent environment of our flagship cloud product MongoDB Atlas that’s built for US government needs. It allows federal, state, and local governments as well as educational institutions to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

We are also pleased to announce that MongoDB Atlas for Government has been approved as FedRAMP Ready. FedRAMP Ready indicates that a third-party assessment organization has vouched for a cloud service provider’s security capabilities, and the FedRAMP PMO has reviewed and approved the Readiness Assessment Report.

MongoDB Atlas for Government highlights:

Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.
Atlas for Government clusters can span regions within AWS GovCloud or within AWS (but not across those two environments).
Atlas core features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.
Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting started and pricing

MongoDB Atlas for Government is available to Government customers or companies that sell to the US Government. You can buy Atlas for Government through AWS GovCloud or the AWS Marketplace. Of course, you can also work directly with MongoDB; please fill out this form and a representative will get in touch with you. To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.

June 28, 2021

Flowhub Relies on MongoDB to Meet Changing Regulations and Scale Its Business

The legal landscape for cannabis in the United States is in constant flux. Each year, new states and other jurisdictions legalize or decriminalize it, and the regulations governing how it can be sold and used change even more frequently. For companies in the industry, this affects not only how they do business, but also how they manage their data. Responding to regulatory changes requires speedy updates to software and a database that makes it easy to change the structure of your data as needed, not to mention the scaling needs of an industry that’s growing incredibly rapidly.

Flowhub makes software for the cannabis industry, and the company is leaping these hurdles every day. I recently spoke with Brad Beeler, Lead Architect at Flowhub, about the company, the challenges of working in an industry with complex regulations, and why Flowhub chose MongoDB to power its business. We also discussed how consulting from MongoDB not only improved performance but also saved the company money, generating a return on investment in less than a month.

Eric Holzhauer: First, can you tell us a bit about your company?

Brad Beeler: Flowhub provides essential technology for cannabis dispensaries. Founded in 2015, Flowhub pioneered the first Metrc API integration to help dispensaries stay compliant. Today, over 1,000 dispensaries trust Flowhub's point of sale, inventory management, business intelligence, and mobile solutions to process more than $3 billion in cannabis sales annually.

Flowhub in use in a dispensary

EH: How is Flowhub using MongoDB?

BB: Essentially all of our applications – point of sale, inventory management, and more – are built on MongoDB, and we’ve been using MongoDB Atlas from the beginning. When I joined two and a half years ago, our main production cluster was on an M40 cluster tier, and we’ve now scaled up to an M80. The business has expanded a lot, both by onboarding new clients with more locations and by increasing sales in our client base. We’re now at $3 billion of customer transactions a year. As we went through that growth, we started by making optimizations at the database level prior to throwing more resources at it, and then went on to scale the cluster.

One great thing about Atlas is that it gave us the metrics we needed to understand our growth. After we’d made some optimizations, we could look at CPU and memory utilization, check that there wasn’t a way to further improve query execution with indexes, and then know it was time to scale. It’s really important for usability that we keep latency low and that the application UI is responsive, and scaling in Atlas helps us ensure that performance.

We also deploy an extra analytics node in Atlas, which is where we run queries for reporting. Most of our application data access is relatively straightforward CRUD, but we run aggregation pipelines to create reports: day-over-day sales, running financials, and so forth. Those reports are extra intensive at month-end or year-end, when our customers are looking back at the prior period to understand their business trends. It’s very useful to be able to isolate that workload from our core application queries. I’ll also say that MongoDB Compass has been an amazing tool for creating aggregation pipelines.

EH: Can you tell us some more about what makes your industry unique, and why MongoDB is a good fit?

BB: The regulatory landscape is a major factor. In the U.S., there’s a patchwork of regulation, and it continues to evolve – you may have seen that several new states legalized cannabis in the 2020 election cycle. States are still exploring how they want to regulate this industry, and as they discover what works and what doesn’t, they change the regulations fairly frequently. We have to adapt to those changing variables, and MongoDB facilitates that. We can change the application layer to account for new regulations, and there’s minimal maintenance to change the database layer to match. That makes our development cycles faster and speeds up our time to market. MongoDB’s flexibility is great for moving quickly to meet new data requirements. As a few concrete examples:

The state of Oregon wanted to make sure that consumers knew exactly how much cannabis they were purchasing, regardless of format. Since some dispensaries sell prerolled products, they need to record the weight of the paper associated with those products. So now that’s a new data point we have to collect. We updated the application UI to add a form field where the dispensary can input the paper weight, and that data flows right into the database.

Dispensaries are also issuing not only purchase receipts, but exit labels like what you’d find on a prescription from a pharmacy. And depending on the state, that exit label might include potency level, percentage of cannabinoids, what batch and package the cannabis came from, and so on. All of that is data we need to be storing, and potentially calculating or reformatting according to specific state requirements.

Everything in our industry is tracked from seed to sale. Plants get barcodes very early on, and that identity is tracked all the way through different growth cycles and into packaging. So if there’s a recall, for example, it’s possible to identify all of the products from a specific plant, or plants from a certain origin. Tracking that data and integrating with systems up the supply chain is critical for us.

That data is all tracked in a regulatory system. We integrate with Metrc, which is the largest cannabis tracking system in the country. So our systems feed back into Metrc, and we automate the process of reporting all the required information. That’s much easier than a manual alternative – for example, uploading spreadsheets to Metrc, which dispensaries would otherwise need to do. We also pull information down from Metrc. When a store receives a shipment, it will import the package records into our system, and we’ll store them as inventory and get the relevant information from the Metrc API.

Flowhub user interface

EH: What impact has MongoDB had on your business?

BB: MongoDB definitely has improved our time to market in a couple of ways. I mentioned the differences of regulation and data requirements across states; MongoDB’s flexibility makes it easier to launch into a new state and collect the right data or make required calculations based on data. We also improve time to market because of developer productivity. Since we’re a JavaScript shop, JSON is second nature to our developers, and MongoDB’s document structure is very easy to understand and work with.

EH: What version of MongoDB are you using?

BB: We started out on 3.4, and have since upgraded to MongoDB 4.0. We’re preparing to upgrade to 4.2 to take advantage of some of the additional features in the database and in MongoDB Cloud. One thing we’re excited about is Atlas Search: by running a true search engine close to our data, we think we can get some pretty big performance improvements.

Most of our infrastructure is built on Node.js, and we’re using the Node.js driver. A great thing about MongoDB’s replication and the driver is that if there’s a failover and a new primary is elected, the driver keeps chugging, staying connected to the replica sets and retrying reads and writes if needed. That’s prevented any downtime or connectivity issues for us.

EH: How are you securing MongoDB?

BB: Security is very important to us, and we rely on Atlas’s security controls to protect data. We’ve set up access controls so that our developers can work easily in the development environment, but there are only a few people who can access data in the production environment. IP access lists let us control who and what can access the database, including a few third-party applications that are integrated into Flowhub. We’re looking into implementing VPC Peering for our application connections, which currently go through the IP access list.

We’re also interested in Client-Side Field-Level Encryption. We already limit the amount of personally identifiable information (PII) we collect and store, and we’re very careful about securing the PII we do need to store. Client-Side Field-Level Encryption would let us encrypt that at the client level, before it ever reaches the database.

EH: You're running on Atlas, so what underlying cloud provider do you use?

BB: We’re running everything on Google Cloud. We use Atlas on Google Cloud infrastructure, and our app servers are running in Google Kubernetes Engine. We also use several other Google services. We rely pretty heavily on Google Cloud Pub/Sub as a messaging backbone for an event-driven architecture. Our core applications initially were built with a fairly monolithic architecture, because it was the easiest approach to get going quickly. As we’ve grown, we’re moving more toward microservices. We’ve connected Pub/Sub to MongoDB Atlas, and we’re turning data operations into published events. Microservices can then subscribe to event topics and use the events to take action and maintain or audit local data stores.

Our data science team uses Google BigQuery as the backend to most of our own analytics tooling. For most uses, we migrate data from MongoDB Atlas to BigQuery via in-house ETL processes, but for more real-time needs we’re using Google Dataflow to connect to MongoDB’s oplog and stream data into BigQuery.

EH: As you grow your business and scale your MongoDB usage, what's been the most important resource for you?

BB: MongoDB’s Flex Consulting has been great for optimizing performance and scaling efficiently. Flowhub has been around for a number of years, and as we’ve grown, our database has grown and evolved. Some of the schema, query, and index decisions that we had made years ago weren’t optimized for what we’re doing now, but we hadn’t revisited them comprehensively. Especially when we were scaling our cluster, we knew that we could make more improvements. Our MongoDB Consulting Engineer investigated our data structure and how we were accessing data, performance, what indexes we had, and so on. We even got into the internals of the WiredTiger storage engine and what optimizations we could make there. We learned a ton about MongoDB, and the Consulting Engineer also introduced us to some tools so we could diagnose performance issues ourselves.

Based on our Consulting Engineer’s recommendations, we changed the structure of how we stored some data and reworked certain queries to improve performance. We also cleaned up a bunch of unnecessary indexes. We had created a number of indexes over the years for different query patterns, and our Consulting Engineer was able to identify which ones could be removed wholesale, and which indexes could be replaced with a single new one to cover different query patterns. We made some optimizations in Atlas as well, moving to a Low CPU instance based on the shape of our workload and changing to a more efficient backup option.

With the optimizations recommended in our consulting engagement, we were able to reduce our spend by more than 35%. MongoDB consulting paid for itself in less than a month, which was incredible. I had to develop a business case internally for investing in consulting, and this level of savings made it an easy sell.

The knowledge we picked up during our consulting engagement was invaluable. That’s something we’ll carry forward and that will continue to provide benefits. We’re much better at our indexing strategy, for example. Say you’re introducing a new type of query and thinking about adding an index: now we know what questions to ask. How often is this going to be run? Could you change the query to use an existing index, or change an existing index to cover this query? If we decide we need a new index, should we deprecate an old one?

EH: What advice would you give to someone who's considering MongoDB for their next project?

BB: Take the time upfront to understand your data and how it’s going to be used. That’ll give you a good head start for structuring the data in MongoDB, designing queries, and implementing indexes. Obviously, Flex Consulting was very helpful for us on this front, so give that a look.
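As a rough illustration of the kind of reporting pipeline Beeler mentions, here is a minimal day-over-day sales aggregation. The sample documents and field names are invented, the pipeline shows what would run server-side on an analytics node, and the plain-JavaScript function simply mirrors what the $group and $sort stages would compute.

```javascript
// Hypothetical sales documents (not Flowhub's actual schema).
const sales = [
  { date: "2021-01-01", total: 40 },
  { date: "2021-01-01", total: 60 },
  { date: "2021-01-02", total: 50 },
];

// The MongoDB aggregation pipeline for a day-over-day report might look like:
const pipeline = [
  { $group: { _id: "$date", dailyTotal: { $sum: "$total" } } },
  { $sort: { _id: 1 } },
];

// Plain-JS equivalent of those two stages, for illustration only.
function dayOverDay(docs) {
  const totals = new Map();
  for (const d of docs) totals.set(d.date, (totals.get(d.date) || 0) + d.total);
  return [...totals.entries()]
    .map(([date, dailyTotal]) => ({ _id: date, dailyTotal }))
    .sort((a, b) => (a._id < b._id ? -1 : 1));
}
```

In production the pipeline array would be passed to the driver's aggregate call, and reads like this can be routed to the analytics node so reporting never competes with application CRUD traffic.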

January 19, 2021

Showingly Transforms Real Estate with MongoDB Atlas and MongoDB Realm

>> Announcement: Some features mentioned below will be deprecated on Sep. 30, 2025. Learn more . Buying or selling a house is difficult. There are more steps than you could imagine, and each one feels harder than it should be. The improvements that technology has made in the past decades seem to have passed the industry by. Showingly is trying to make the process less difficult, starting by making it easier to see houses you might want to buy. Buyers can browse listings and book showings with no fuss. Sellers and their agents can make listings available on the app, making it easier for buyers to find them. Agents and other real estate professionals can simplify their workflows. Showingly built its full stack on MongoDB, using MongoDB Atlas and MongoDB Realm . I caught up with Andrew Coca, Co-Founder and President, to learn more. Tell us a little bit about your company. Showingly is a real estate platform for buyers, sellers, and professionals. Of course, it does everything you would expect: sellers can have their house listed on the platform, buyers can browse listings and see all of the important details, agents can manage their listings, and so on. What sets Showingly apart is that consumers can actually drive the showing process, instead of merely searching for homes. On most real estate platforms, if you find a house you’re interested in, you might see a list of times and dates to schedule an appointment, but you’re not actually booking a showing. Instead, the platform sells that to an agent as a lead, and the agent has to follow up with you – they may or may not be able to show you the house at that time. Showingly is an actual showing platform: when you book a showing, we’re really scheduling it for you in our backend. Until now, the home showing process has been prioritizing agent convenience instead of consumer convenience. Now, for the first time, consumers can have the transparency of directly booking the showings they want. 
We also have features built for agents and other professionals. For example, agents are able to delegate showings. If you’re too busy, you can find another agent to show one of your listings; on the other side of the coin, if you have spare time, you can turn that into money by picking up delegated showings. We’ve been building Showingly for two years, and we launched publicly five months ago. We’re currently live in Alaska, Arizona, Colorado, Hawaii, Massachusetts, South Carolina, Tennessee, and Utah. We continue to integrate with more multiple listing services (MLSs) to launch into new markets. I have to ask: What has your experience been launching a real estate application during a global pandemic? Early on, it made raising funding a little harder, but we were able to find investors who wanted to invest during COVID. Surprisingly, it then made hiring easier: people who had lost their internships or jobs came to work for us. We grew from a couple engineers to a dozen, and we’ve probably built as much in the last two months as we did in the year before. It hasn’t had an enormous effect at a business level. As you might know, real estate markets have reacted in very different ways to COVID. In the Northeast, the market is cautious, but here in Colorado, it’s booming. What was the genesis of Showingly? We wanted to start Showingly after learning the ins and outs of the industry through real estate sales. I’ve been an agent, I’ve managed a team of agents, and I’ve experienced a lot of parts of the process. Real estate is such an archaic industry. Technology has done so much in the last 10 or 20 years, and the real estate industry hasn’t materially improved. Sure, there have been some new applications, but they don’t fundamentally change the process – they just put the old process into a pretty website. We saw a big opportunity to actually transform the industry. How is Showingly using MongoDB? Showingly is built fully on MongoDB. 
We’re storing all of our data in MongoDB Atlas, running on AWS. We have one main production cluster, plus a few dev and test clusters. Our application backend is built entirely on MongoDB Realm, using Realm Functions . It’s really nice to have it all in one place. We use functions for data movement, like retrieving listings from an MLS, as well as application functionality. When you take any action in the app – displaying listings, displaying showings, creating a new showing, updating a showing, and so on – that’s calling a Realm function to access the data in MongoDB Atlas. For frontend, we have a cross-platform mobile app built with React Native, as well as a web client for agents and other professionals. It’s easy to connect to Realm and Atlas from those different clients. What made you decide to use MongoDB Atlas and Realm’s application development services? We went to MongoDB World last year, and learned about the document model, MongoDB Atlas, MongoDB Realm, and all of your other products. We knew then that it was the right way to build a simple but powerful architecture. MongoDB has made it easy to get started, but it will also scale with our business: we could go nationwide or worldwide, and MongoDB makes that easy. There’s no reason we would ever need to change our backend. As I mentioned, my background is in real estate, not technology. But over the past couple of years, I’ve gotten quite good at working with MongoDB Realm and MongoDB Atlas. The simplicity of JavaScript on the frontend and MongoDB on the backend makes it an easy stack to work with. And getting expert help from a consulting engineer quickly taught us the best practices for developing with MongoDB. Working with MongoDB let us develop a great application quickly. And how would you describe the benefit of MongoDB to your business? Time to market is certainly part of it, as I mentioned. But I wouldn’t have picked MongoDB just for that. 
Building for the long term is more important to me than time to market. I wouldn’t take on technical debt up front just to be able to move more quickly. I want to build a structure that lasts, and I’m confident that what we’re building with MongoDB Realm and MongoDB Atlas is just that.

You mentioned that the ability to handle scaling in the future was key for you. What are your plans for scale?

We actually use auto-scaling in Atlas, so our production cluster automatically scales up and down depending on workload. Right now, it’s usually either an M10 or an M20, fluctuating between the two as needed. If and when the workload of our application increases beyond that level, Atlas will continue to scale up to match. Auto-scaling is so easy to set up, so why do it ourselves? And we know that if we need to move to a multi-region or global cluster, that’s very easy to do. And of course, MongoDB Realm is serverless, so we don’t need to worry about scale on that side at all. We just define functions, and they run when needed at any scale.

You said you worked with a MongoDB consulting engineer. Can you describe that process?

We’ve had several Flex Consulting engagements in the Design & Develop and Optimize tracks. Flex Consulting has been the key to making the best use of MongoDB Atlas and MongoDB Realm for our application. We covered a number of different points in our consulting engagements. First and foremost was getting the schema design right. For example, our consulting engineer helped us model the data structure of listings and showings (e.g., embedding vs. linking information) and represent data in a way that matches how the application uses it. Getting that design right the first time definitely helped avoid more work down the road.
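To make the embedding-vs.-linking trade-off concrete, here is a sketch of what such a listing document could look like. This is illustrative only, not Showingly's actual schema; every field name is invented.

```javascript
// Hypothetical listing document mixing the two modeling styles.
const listing = {
  _id: "listing123",
  mlsId: "MLS-0042",
  address: { street: "123 Main St", city: "Denver", state: "CO" },
  price: 450000,
  // Embedded: showings are almost always read together with their
  // listing, so they live inside the listing document itself.
  showings: [
    { start: new Date("2020-12-01T17:00:00Z"), durationMinutes: 30,
      agentId: "agent42", status: "completed" },
    { start: new Date("2020-12-03T19:00:00Z"), durationMinutes: 30,
      agentId: "agent7", status: "scheduled" },
  ],
  // Linked: agents are shared across many listings, so only a
  // reference is stored here; the full profile would live in a
  // separate `agents` collection.
  listingAgentId: "agent42",
};
```

The rule of thumb this reflects: embed data that is read and written with its parent, and link to data that is shared or unbounded.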
Advice from the MongoDB engineer also helped us control data quality when multiple people and processes can update records. We fetch listings from MLSs, and if all we had to do was present listings, it would be simple. But of course we’re also dealing with showings tied to listings, we’re enriching those records with other fields for our specific use, and there are cases where an agent might be modifying some of that extra data. So when we refresh listings from the MLS or make updates from the application, we need to make sure those updates aren’t clobbering other data. Our consulting engineer helped us design Realm functions with the correct upsert behavior in all cases.

These consulting engagements were almost like school for us. We spent time with our consulting engineer understanding the technology and making the right design decisions, which was super valuable. Flex Consulting not only gave us the expert help we needed, but pulled us along the path to being expert MongoDB users ourselves.

What’s next for Showingly?

In the short term, it’s about growing use in the markets where Showingly is live, and launching into new markets. On the product side, we’re adding some social elements so that agents can see what their peers are up to. We’re also very eager to do more analysis of our data, integrating machine learning to do things such as improving pricing for some of our agent features. In the long term, we want Showingly to be the real estate platform. To date, real estate hasn’t had a good platform that facilitates the process. If you’ve ever bought or sold a home, you know how convoluted it is. You should be able to do everything in a single platform: finding potential homes, getting pre-approvals, scheduling and going on showings, writing an offer, signing contracts and other documents, even closing and getting insurance. We want Showingly to be that platform, to turn a 30-day process into a 3-day process.

What advice would you give to someone considering MongoDB for their next project?

First, take advantage of consulting. MongoDB’s consulting is the best way to build your project not only quickly, but correctly for the long term. Second, you’ll get the biggest benefit from using the whole stack of Realm and Atlas together. There’s an enormous amount of convenience in having everything in one place.
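The non-clobbering MLS refresh discussed in the interview can be sketched roughly as follows: restrict the update to the fields the MLS owns, so app-enriched fields survive each re-import. This is a minimal sketch under invented names, not Showingly's implementation.

```javascript
// Fields owned by the MLS feed; everything else on the document
// (agent notes, showing stats, etc.) belongs to the application.
// The list itself is hypothetical.
const MLS_FIELDS = ["price", "status", "address", "bedrooms", "bathrooms"];

// Build a $set update containing only MLS-owned fields, so a refresh
// never overwrites application-owned data.
function buildMlsRefreshUpdate(mlsRecord) {
  const set = {};
  for (const field of MLS_FIELDS) {
    if (field in mlsRecord) {
      set[field] = mlsRecord[field];
    }
  }
  return { $set: set };
}

// With a driver, this would be applied as an upsert keyed on the MLS id:
//   listings.updateOne({ mlsId: rec.mlsId },
//                      buildMlsRefreshUpdate(rec),
//                      { upsert: true });
```

Because the update uses $set on a whitelist of fields rather than replacing the whole document, concurrent edits from agents and from the application are preserved.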

December 22, 2020

MongoDB Atlas Arrives in Italy | MongoDB Atlas Arriva in Italia

We’re delighted to announce our first foray into Italy with the launch of MongoDB Atlas in the AWS Europe (Milan) region. MongoDB Atlas is now available in 20 AWS regions around the world, including 6 European regions. Milan is a Recommended Region, meaning it has three Availability Zones (AZs). When you deploy a cluster in Milan, Atlas automatically distributes replicas across the AZs for higher availability — if there’s an outage in one zone, the Atlas cluster will automatically fail over and keep running in the other two. You can also deploy multi-region clusters with the same automatic failover built in. We’re excited that, like customers in France, Germany, the UK, and more, Italian organizations will now be able to keep data in-country, delivering low-latency performance and ensuring confidence in data locality. We’re confident our Italian customers in government, financial services, and utilities in particular will appreciate this capability as they build tools to improve citizens’ lives and better serve their local users.

Explore Atlas on AWS Today

In Italian, courtesy of Dominic:

Siamo lieti di annunciare la nostra espansione in Italia rendendo disponibile MongoDB Atlas nella regione AWS Europa (Milano). MongoDB Atlas è ora disponibile in 20 regioni AWS nel mondo, comprese 6 regioni europee. Milano è una Recommended Region; questo significa che ha tre Availability Zones (AZ). Quando viene creato un cluster a Milano, Atlas distribuisce automaticamente le repliche sulle diverse AZ per aumentare la disponibilità e l’affidabilità — nel caso in cui avvenga un disservizio in una zona, il cluster Atlas utilizzerà la funzionalità di failover per restare in esecuzione sulle altre due. Eventualmente è anche possibile creare cluster multi-region che incorporano la stessa logica di failover automatico.
Siamo felici che anche le realtà italiane possano scegliere, come i nostri clienti in Francia, Germania, UK, ed altrove, di mantenere i propri dati all’interno dei confini nazionali, dando risposte a bassa latenza ai propri utenti ed assicurando loro la fiducia nella localizzazione fisica dei dati. Siamo sicuri che i nostri clienti in Italia, in particolare nel settore pubblico, nei servizi finanziari e nelle utilities, apprezzeranno queste nuove possibilità per la creazione di nuovi strumenti per migliorare la vita dei cittadini e servire meglio i loro utenti in Italia.

Scopri subito Atlas disponibile su AWS

November 4, 2020

Announcing Azure Private Link Integration for MongoDB Atlas

We’re excited to announce the general availability of Azure Private Link as a new network access management option in MongoDB Atlas.

MongoDB Atlas is built to be secure by default. All dedicated Azure clusters on Atlas are deployed in their own VNET. For network security controls, you already have the options of an IP Access List and VNET Peering. The IP Access List in Atlas offers a straightforward and secure connection mechanism, and all traffic is encrypted with end-to-end TLS. But it requires that your application servers have static public IPs to connect to Atlas, and that all such IPs be listed in the Access List. If your applications don’t have static public IPs, or if you have strict requirements on outbound database access via public IPs, this won’t work for you. The existing solution is VNET Peering, with which you configure a secure peering connection between your Atlas cluster’s VNET and your own VNET(s). This is easy, but the connections are two-way. While Atlas never has to initiate connections to your environment, some customers see VNET peering as extending the network trust boundary anyway. Although Access Control Lists (ACLs) and security groups can control this access, they require additional configuration.

MongoDB Atlas and Azure Private Link

Now, you can use Azure Private Link to connect a VNET to MongoDB Atlas. This brings two major advantages:

Unidirectional: connections via Private Link use a private IP within the customer’s VNET and are unidirectional, such that the Atlas VNET cannot initiate connections back to the customer’s VNET. Hence, there is no extension of the network trust boundary.

Transitive: connections to the Private Link private IPs within the customer’s VNET can come transitively from another VNET peered to the Private Link-enabled VNET, or from an on-premises data center connected with ExpressRoute to the Private Link-enabled VNET.
This means that customers can connect directly from their on-premises data centers to Atlas without using public IP Access Lists.

Azure Private Link offers a one-way network peering service between an Azure VNET and a MongoDB Atlas VNET.

Meeting Security Requirements with Atlas on Azure

Azure Private Link adds to the security capabilities already available in MongoDB Atlas, like Client-Side Field Level Encryption, database auditing, BYO key encryption with Azure Key Vault integration, federated identity, and more. MongoDB Atlas undergoes independent verification of security and compliance controls, so you can be confident in using Atlas on Azure for your most critical workloads.

Ready to try it out? Get started with MongoDB Atlas today! Sign up now

October 15, 2020

Fraud Detection at FICO with MongoDB and Microservices

FICO is more than just the FICO credit score. Founded in 1956, FICO also offers analytics applications for customer acquisition, service, and security, plus tools for decision management. One of those applications is the Falcon Assurance Navigator (FAN), a fraud detection system that monitors purchasing and expenses through the full procure-to-pay cycle. Consider an expense report: the entities involved include the reporter, the approver, the vendor, the department or business unit, the expense line items, and more. A single report has multiple line items, and each line may be broken into different expense codes, different budget sources, and so on. This translates into a complicated data model that can be nested 6 or 7 layers deep – a great match for MongoDB’s document model, but quite hard to represent in the tabular model of relational databases.

FAN Architecture Overview

The fraud detection engine consists of a series of microservices ("Introduction to Microservices and MongoDB") that operate on transactions in queues persisted in MongoDB:

Each transaction arrives in a receiver service, which places it into a queue.

An attachment processor service checks for an attachment; if one exists, it sends it to an OCR service and stores the transaction enriched with the OCR data.

A context creator service analyzes the transaction and associates it with any related past transactions.

A decision execution engine runs the rules that have been set up by the client and identifies violations.

One or more analytics engines review transactions and flag outliers.

Now decorated with a score, the transaction goes to a case manager service, which decides whether to create a case for human follow-up based on any identified issues.

At the same time, a notification manager passes updates on the processing of each transaction back to the client’s expense/procurement system.

To learn more, watch FICO’s presentation at MongoDB World 2018.
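A common way to implement the queue hand-off between stages like these is an atomic claim on the transaction document; the sketch below shows the filter and update documents such a worker might use. This is an illustration of the general pattern, not FICO's code, and the collection and field names are invented.

```javascript
// Build the filter a stage worker uses to find work: only transactions
// queued for this stage. All names here are hypothetical.
function buildClaimFilter(stage) {
  return { stage: stage, status: "queued" };
}

// Build the update that marks a transaction as claimed by a worker.
function buildClaimUpdate(workerId) {
  return {
    $set: { status: "processing", workerId: workerId, claimedAt: new Date() },
  };
}

// With a driver, an OCR-stage worker would claim the oldest queued
// transaction with a single atomic operation, so no two workers can
// pick up the same document:
//   transactions.findOneAndUpdate(
//     buildClaimFilter("ocr"),
//     buildClaimUpdate("worker-1"),
//     { sort: { receivedAt: 1 } }
//   );
```

On completion, the worker would set `status` back to "queued" and advance `stage` to the next service in the pipeline, and the next stage's workers pick it up the same way.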

September 27, 2018

Leaf in the Wild: SilkRoute Chooses MongoDB Over SQL Server for Critical Quality Assurance Platform

Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

MongoDB chosen for development productivity, operational efficiency with Cloud Manager, and "truly outstanding" professional services

From manufacturing to retail, every part of the supply chain is starting to see the value of data. Whether it’s developing IoT quality assurance applications in manufacturing to ensure products are defect-free, or building data-driven customer loyalty programs so that brands can connect with and reward their fans, the top companies are working to improve their approach to data. SilkRoute Global is a software-as-a-service company focused on this industry. Its analytics products automate processes and present consumable, useful information to its customers. To understand the benefits they get from MongoDB, I spoke with Devin Duden, CTO of OmniSky (a division of SilkRoute) & Senior Software Engineer, and Amjad Hussain, CEO & Chief Data Scientist.

Tell us a little bit about SilkRoute.

SilkRoute is a passionate team of designers, machine learning scientists, and software engineers with tremendous industry knowledge of manufacturing, distribution, and retail. We live for solving big problems. Our industry-specific predictive and prescriptive analytics platform creates immense operational and strategic value for our customers. Our customer footprint is global and growing. Applied machine learning, business process automation, and mobility are woven into the fabric of everything we build. We offer a unique risk-free rapid implementation and integration approach for our customers to enjoy our solutions.

Please describe how you’re using MongoDB.

The application SilkRoute is building is a mobile application performing RFID inspections on industrial manufactured products.
The application provides a centralized data store of customers’ products and the inspections associated with a product, and allows those customers to easily share the inspection records with others. MongoDB was chosen for this application based on:

Simplified schema design

Increased flexibility for modeling complex relationships (e.g., using MongoDB eliminated recursive relationships necessary in a SQL-based solution)

Easier capture of user-generated data

Reduced development timeline

Durability, scalability, and disaster recovery

SilkRoute Enterprise mobile RFID inspection architecture

What were you using before MongoDB? Was this a new project or did you migrate from a different database?

The current version of the application is a client-server implementation using SQL Server as a cloud sharing data store and Windows CE on the mobile device. The application is a rewrite.

How did you hear about MongoDB? Did you consider other alternatives, like relational or NoSQL databases?

I was introduced to MongoDB three years ago when I started working at SilkRoute. We were working on a social network at the time, which was using MongoDB as its primary data store. The RFID mobile application’s technical requirements originally specified MS SQL Server; this requirement was provided by the client. During our Joint Application Design session with the client, we suggested using MongoDB, but didn’t make headway. When we attended MongoDB World 2015, we gathered enough detail about MongoDB’s capabilities, along with real-world examples of high-volume, transaction-based applications built on MongoDB, that we were able to persuade the client to switch from SQL Server to MongoDB.

Please describe your MongoDB deployment, technology stack, and the version of MongoDB that you are running.

The MongoDB deployment is a 5-node replica set using Cloud Manager for operational management and deployment.
The replica set is deployed in the US East AWS region across all availability zones. At this point, we have not implemented sharding. The MongoDB replica set has been deployed in AWS following MongoDB’s best practices, using Amazon Linux AMIs. Each production node runs on an EC2 instance with 16 GB of memory and 4 CPU cores, with three 100 GB provisioned-IOPS EBS volumes, each formatted with XFS. One volume is mapped for data, one for the log, and one for the journal. The API stack is written in .NET 5 using the C# MVC/Web API framework. We are using version 2.0 of the MongoDB .NET driver.

Are you using any tools to monitor, manage, and back up your MongoDB deployment? If so, what? Do you use Ops Manager / Cloud Manager?

The replica set has been deployed and is managed using Cloud Manager, which simplified and streamlined replica set deployment and operations. This solution is the first time most team members have used MongoDB, so to reduce time spent on replica set deployment and configuration, Cloud Manager was a great fit. Following Cloud Manager’s directions to create AWS EC2 instances made it very easy for us to create images and build or tear down replica sets quickly. Streamlining manual tasks allowed the team to spend more time on development than on deploying a fully managed MongoDB replica set. In addition to Cloud Manager, the team has just started using MongoDB Compass to analyze collections and document sizes.

Are you integrating MongoDB with other data analytics, BI, or visualization tools? If so, can you share any details?

At this point we have not integrated any BI. One of our objectives is to connect with the client’s BI system using the MongoDB Connector for BI and/or extract data from a tagged node to hydrate a SQL-based BI system. We’re planning to perform a POC on the Connector for BI, now that it has been released.

How are you measuring the impact of MongoDB on your business?
SilkRoute measures MongoDB’s impact by many factors, including ease of deployments, a code-first approach, an increased agile development model, reduced total cost of ownership, and reduced time to market. The ease of deployments reduces or eliminates maintenance windows when spinning up a replica set or upgrading database versions, which means higher uptime for customers and less productive time eaten up for developers. A code-first approach adds to the savings by eliminating daunting DDL script management and aids better agile development. These factors result in reduced total cost of ownership and faster time to market.

Do you use any commercial subscriptions or services to support your MongoDB deployment?

SilkRoute is a MongoDB OEM partner. For the RFID application we will be embedding MongoDB Enterprise Server 3.2 and managing the deployment with Cloud Manager. We allocated a budget for MongoDB’s professional services in the early stages of the project. The professional services were tailored to the team’s skill set and agenda. Over two separate onsite sessions, we covered topics from deployment, management, and recovery using Cloud Manager to schema modeling and scaling. The value gained working hands-on with a MongoDB consulting engineer was worth twice the investment. During one session, we encountered a disaster recovery situation in a non-production environment. Unexpected though the situation was, I personally gained the most from working through the issue with a MongoDB expert in a very collaborative fashion. The professionalism and knowledge of our MongoDB consulting engineer were truly outstanding.

Do you have plans to use MongoDB for other applications? If so, which ones?

Yes, both internal initiatives and client initiatives. These include BI, a Warehouse Manager SaaS solution, a customer loyalty/couponing app, and client SaaS solutions, which we are not at liberty to disclose at this point.
We would prefer to use MongoDB for all application and system development projects. Our preference is based on ease of use, an emphasis on a code-first approach for projects going forward, and built-in scalability and durability.

Have you upgraded to MongoDB 3.2? What most excites you about this release?

We’ve been developing the solution using MongoDB 3.0.x. We are actively migrating the database to version 3.2.1, and the production deployment will use 3.2.1. The most exciting features of MongoDB 3.2 for us are the BI Connector, document validation, $lookup, and WiredTiger’s in-memory option. We feel the biggest value adds for our clients are the BI Connector and the in-memory storage engine. The BI Connector will allow our clients’ BI environments to integrate directly with the solution we are building, eliminating the need to write ETL processes from MongoDB to a BI environment. The in-memory storage engine will increase read performance, which will reduce latency for API requests. Anything that increases overall performance is a plus.

What advice would you give someone who is considering using MongoDB for their next project?

I would highly recommend allocating a budget for MongoDB’s professional services to help with operations, deployment, and schema modeling. The value gained from their best-practices approach really reduces learning curves and POC time. Coming from a SQL world, prepare ERDs and break the ERDs into schema designs. This approach will help bridge team members from a relational to a non-relational data store. Take a top-down development approach, as it will uncover access patterns that may help with schema modeling.

Thank you for sharing your MongoDB experiences with us!

If you’re comparing MongoDB with relational databases, read our RDBMS to MongoDB Migration Guide to learn more.

Read the RDBMS to MongoDB Migration Guide

About the Author - Eric Holzhauer

Eric is a Product Marketing Manager at MongoDB.

January 26, 2016