Atlas Stream Processing features

Explore what makes Atlas Stream Processing powerful and easy to use.

An easy-to-use stream processing experience

MongoDB Atlas Stream Processing was designed to feel familiar and easy to use. It makes continuous processing of data streams as straightforward as working with a MongoDB database. By building on the document model and extending the MongoDB Query API with operators for stream processing use cases, it helps developers build applications that react continuously to the world around them.
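
Conceptually, a stream processor is an aggregation pipeline with streaming-specific stages at its edges. A minimal sketch in mongosh, where the connection names ("kafkaProd", "atlasProd"), the topic, and the field names are all hypothetical:

```javascript
// Read from Kafka, filter with a standard Query API stage, and write
// continuously into an Atlas collection. All names are illustrative.
let pipeline = [
  { $source: { connectionName: "kafkaProd", topic: "orders" } },  // streaming source
  { $match: { status: "completed" } },                            // familiar Query API stage
  { $merge: { into: { connectionName: "atlasProd", db: "sales", coll: "completedOrders" } } }
];
sp.createStreamProcessor("completedOrders", pipeline);
sp.completedOrders.start();
```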

Integrated natively into MongoDB Atlas

Atlas Stream Processing is a fully managed, globally available MongoDB Atlas service.

Built on the document model

As with the database itself, streaming data demands flexibility: schemas adapt and evolve over time. The document model's flexible structure is well suited to this.

Integrates MongoDB and Apache Kafka

Easily connect to your key streaming sources and sinks in Kafka and Atlas, and merge data continuously.
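
The reverse direction also works: an Atlas collection's change stream can serve as the source, with results published to a Kafka topic. A sketch, again with hypothetical connection, database, and topic names:

```javascript
// Source events from changes to an Atlas collection and emit them
// to a Kafka topic. Connection names are assumptions, not prescribed.
sp.createStreamProcessor("ordersToKafka", [
  { $source: { connectionName: "atlasProd", db: "sales", coll: "orders" } },
  { $emit: { connectionName: "kafkaProd", topic: "sales.orders" } }
]);
```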

Extends the MongoDB Query API

The Query API and aggregation framework include extended capabilities for processing complex, continuous data streams.

Powerful processing capabilities

Working with streaming data, or data in motion, differs from working with data at rest in a database. Streaming data tends to be highly varied and heterogeneous, and it arrives at high volume and velocity. Handling it requires flexible, continuous processing capabilities that enable near real-time product experiences.

Continuous processing

Create time-based windows, perform lookups across collections, and apply complex validation for rich, multi-event processing.
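
A sketch of a tumbling (fixed, non-overlapping) window that aggregates events per 60-second interval and writes each window's results onward; topic, field, and collection names are illustrative:

```javascript
sp.createStreamProcessor("pageViewCounts", [
  { $source: { connectionName: "kafkaProd", topic: "pageViews" } },
  // Group the stream into 60-second windows and aggregate within each.
  { $tumblingWindow: {
      interval: { size: NumberInt(60), unit: "second" },
      pipeline: [
        { $group: { _id: "$page", views: { $sum: 1 } } }
      ]
  } },
  { $merge: { into: { connectionName: "atlasProd", db: "analytics", coll: "pageViewCounts" } } }
]);
```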

Continuous validation

Perform continuous schema validation and detect message corruption or late-arriving data that has missed a processing window.
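
A sketch of continuous validation that routes failing documents to a dead letter queue (DLQ) rather than dropping them silently; the schema, collection names, and options shown are assumptions for illustration:

```javascript
sp.createStreamProcessor("validatedOrders", [
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  // Documents failing the schema are sent to the DLQ instead of onward.
  { $validate: {
      validator: { $jsonSchema: {
        required: ["orderId", "amount"],
        properties: { amount: { bsonType: "double", minimum: 0 } }
      } },
      validationAction: "dlq"
  } },
  { $merge: { into: { connectionName: "atlasProd", db: "sales", coll: "orders" } } }
], {
  // Where rejected or corrupt messages land for later inspection.
  dlq: { connectionName: "atlasProd", db: "sales", coll: "orders_dlq" }
});
```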

Continuous merge

Continuously materialize views into Atlas collections or streaming systems like Apache Kafka to maintain fresh analytical views.
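
A sketch of a continuously materialized view: windowed aggregates are merged into an Atlas collection, keeping it fresh as events arrive. Swapping the final stage for $emit would target a Kafka topic instead. All names are hypothetical:

```javascript
sp.createStreamProcessor("salesByMinute", [
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  { $tumblingWindow: {
      interval: { size: NumberInt(1), unit: "minute" },
      pipeline: [ { $group: { _id: "$region", total: { $sum: "$amount" } } } ]
  } },
  // Continuously write each window's results into the view collection.
  { $merge: { into: { connectionName: "atlasProd", db: "analytics", coll: "salesByMinute" } } }
]);
```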

Intelligent checkpointing

Checkpoints capture the state of a stream processor as operations complete, so processors can easily restart after an interruption and resume where they left off.
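
Because checkpointing is handled by the service, restarting is a one-liner in mongosh; "completedOrders" below refers to the hypothetical processor sketched earlier:

```javascript
sp.completedOrders.stop();
sp.completedOrders.start();  // resumes from the most recent checkpoint
sp.completedOrders.stats();  // inspect progress and processor state
```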

Interactive development experience

Processing streaming data can be opaque. Use .process() to iteratively explore new stream processing pipelines and build quickly.
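
For instance, sp.process() runs a pipeline ad hoc and streams its output straight to the shell, with no named processor to create or clean up (connection and topic names are again hypothetical):

```javascript
// Iterate on a pipeline interactively; results print as events arrive.
sp.process([
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  { $match: { amount: { $gte: 100 } } }
]);
```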

Flexible deployment options

Build out your stream processing infrastructure via the MongoDB shell or automate your stream processors using Terraform.

Deploy with Terraform

Ready to get started?

Check out a tutorial to get started creating a stream processor today.
  • Easily integrate Kafka & MongoDB
  • Process data continuously
  • Native MongoDB experience
  • Available globally