Resources

Online Vs Offline Big Data

Big Data is a major buzzword these days, but many are unsure exactly what it means or how to determine a good strategy for their data. Being able to define your data management strategy can mean the difference between being a market winner or loser. So what is Big Data? Simply put, it refers to data sets so massive in volume and complexity that they cannot be effectively managed by traditional software tools. Harnessing Big Data involves many new technologies that handle data creation, storage, retrieval, and analysis.

Technologies that support Big Data typically fall into two classes: Online Big Data technologies and Offline Big Data technologies. Online Big Data systems offer operational capabilities for real-time, interactive workloads where data is ingested and stored. Examples of these applications include social networking news feeds, real-time ad servers, analytics tools, and CRM applications. In contrast, Offline Big Data systems offer analytical capabilities for retrospective, sophisticated analyses that may touch most or all of the data. Hadoop is an example of an Offline Big Data technology.

But online vs offline Big Data isn’t about choosing one over the other; you most likely need both. MongoDB is an online Big Data technology that serves as an operational data store for today’s Big Data applications. The database also integrates well with many offline Big Data solutions, so you can build a complete data solution. Organizations from startups to large enterprises rely on MongoDB to get their Big Data applications to production faster and with less effort and risk. Download our white paper on Big Data to learn more about the differences between online and offline Big Data and much more.

View white paper →

Open Source Data Analysis Tools

Data is only as useful as the insights you can get from it. But in today’s era of Big Data, where data is growing exponentially and at warp speed, companies are finding it hard to make sense of all the varieties of data sitting across different systems. The enterprises that develop a smart data management strategy and make use of the best Big Data technologies have a leg up on others in their respective industries.

To achieve that advantage, the first thing you need to do is evaluate the technologies that exist to help you grapple with Big Data analytics. These technologies typically fall into one of two categories: online and offline Big Data. Online Big Data solutions are the MongoDBs of the world: the operational databases that ingest and store data in real time for your applications. Offline Big Data solutions, such as Hadoop, complement your online Big Data technologies by processing your data in batch so you can perform retrospective analysis of your operational data and more. You likely need to employ a mix of both to develop a sophisticated analytics platform. Most of these technologies follow the open source model, as is typical of modern software.

MongoDB is the most popular database for the modern era of Big Data applications. More than a third of Fortune 100 companies and hundreds of thousands of users can attest to MongoDB’s strengths in handling any type of data with flexibility, agility, and at a lower cost. In addition, MongoDB connects seamlessly through a new connector to industry-standard business intelligence and data visualization tools, including Tableau, SAP Business Objects, Qlik, and IBM Cognos Business Intelligence. An extensive partner network of over 1,000 providers is also available to help you build a powerful analytics engine with MongoDB. Find out more about how MongoDB helps organizations of all sizes work successfully with open source data analysis tools. Download our white paper today.
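To make the retrospective-analysis idea concrete, here is a minimal sketch in plain Python of the kind of rollup an analytics tool runs over operational data. The sample documents and field names are hypothetical; the comment shows roughly how the same rollup would be expressed as a MongoDB aggregation `$group` stage.

```python
from collections import defaultdict

# Hypothetical sample of "order" documents, as an application might store
# them in an operational database such as MongoDB.
orders = [
    {"region": "east", "total": 120.0},
    {"region": "west", "total": 75.5},
    {"region": "east", "total": 30.0},
]

def revenue_by_region(docs):
    """Roughly what a MongoDB $group stage such as
    {"$group": {"_id": "$region", "revenue": {"$sum": "$total"}}}
    computes: total revenue per region."""
    totals = defaultdict(float)
    for doc in docs:
        totals[doc["region"]] += doc["total"]
    return dict(totals)

print(revenue_by_region(orders))  # {'east': 150.0, 'west': 75.5}
```

In a real deployment this grouping would run inside the database (or a BI tool connected to it) rather than in application code, but the shape of the computation is the same.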

View white paper →

Operational Vs Analytical Big Data

In this modern era of Big Data, where data is getting too unwieldy for older generations of technology to handle, a new class of technologies has sprouted up to meet the need. To succeed and pull away from the competition, you need a strong data management strategy built on the right mix of technologies for your requirements. These new technologies that have arisen in response to Big Data handle data creation, storage, retrieval, and analysis.

When you’re evaluating the different technologies to use, you typically encounter operational vs. analytical Big Data solutions. Operational Big Data systems provide operational features to run real-time, interactive workloads that ingest and store data. MongoDB is a top technology for operational Big Data applications, with over 10 million downloads of its open source software. Analytical Big Data technologies, on the other hand, are useful for retrospective, sophisticated analytics of your data. Hadoop is the most popular example of an analytical Big Data technology.

But picking an operational vs analytical Big Data solution isn’t the right way to think about the challenge. They are complementary technologies, and you likely need both to develop a complete Big Data solution. MongoDB works well with Hadoop thanks to an API integration that makes it easy to combine the two solutions. Many of our customers, such as the City of Chicago, have built amazing applications never before possible by combining operational and analytical technologies. Download our white paper on Big Data to learn more about the differences between operational vs analytical Big Data and much more.
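The operational/analytical split above can be illustrated with a small, self-contained Python sketch (the event shape and function names are assumptions, not part of any real API): the operational path updates running state as each event arrives, while the analytical path recomputes over the full stored history in batch.

```python
# Sketch contrasting the two access patterns: an operational system
# updates state per event in real time, while an analytical system
# recomputes over the full history in batch.

events = []          # full event history (what an offline system would scan)
live_counts = {}     # state maintained incrementally by the online system

def ingest(event):
    """Operational path: update running state as each event arrives."""
    events.append(event)
    live_counts[event["user"]] = live_counts.get(event["user"], 0) + 1

def batch_counts(history):
    """Analytical path: retrospective pass over all stored events."""
    counts = {}
    for event in history:
        counts[event["user"]] = counts.get(event["user"], 0) + 1
    return counts

for e in [{"user": "a"}, {"user": "b"}, {"user": "a"}]:
    ingest(e)

# Both views agree; they differ in when and how the work is done.
assert live_counts == batch_counts(events)
```

The complementary nature of the two approaches is exactly why most real Big Data architectures pair an operational store like MongoDB with a batch system like Hadoop.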

View white paper →

Designing A Database Schema

Designing a database schema is one of the important first steps in the design phase of an application. Choosing the right schema not only affects the application’s performance; it also determines how easily your application can adapt to future requirements or evolving business needs. In traditional relational databases, the schema must be determined upfront, before any data can be added. This inflexible approach makes it hard to update your application to accommodate new data types. If you can’t keep up in today’s world of Big Data and ever-evolving data types, you risk losing out to the competition.

When designing a database schema for your application, consider the flexible schema design of the newer generation of NoSQL databases. A flexible data model is better suited to handle the large volume and variety of data typically generated by modern applications. A flexible data model lets you:

- Quickly adapt to the evolving needs of the business
- Simplify combining data from multiple sources
- Save time updating existing data and incorporating new types of data into your system

MongoDB, the leading NoSQL database, offers a document data model that allows for iterative and adaptive data modeling without a predefined schema. With MongoDB, you can dynamically modify the schema without interruption, simplify your design, and reduce the overall effort to develop applications. To learn more about designing a database schema with MongoDB, read the white paper or contact us.
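A small sketch of what a flexible schema means in practice, using plain Python dicts to stand in for documents (the product fields here are invented for illustration). In MongoDB, both documents could live side by side in the same collection with no upfront schema migration, and application code simply reads whichever fields a document actually has.

```python
# A minimal sketch of the document model using plain Python dicts.
products = [
    {"name": "T-shirt", "price": 15.0, "sizes": ["S", "M", "L"]},
    # A later product type adds fields the original design never anticipated:
    {"name": "E-book", "price": 9.0, "file_format": "epub", "pages": 180},
]

def describe(product):
    """Application code reads whichever fields a document actually has."""
    extras = [k for k in product if k not in ("name", "price")]
    return f"{product['name']} (${product['price']:.2f}), extra fields: {extras}"

for p in products:
    print(describe(p))
```

Adding the "E-book" shape required no ALTER TABLE or migration step; in a relational schema fixed around the first product type, the same change would force a schema update before any e-book row could be stored.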

View white paper →