Atlas Stream Processing is Now in Public Preview

Clark Gates-George and Joe Niemiec

#Atlas Stream Processing

Update May 2, 2024: Atlas Stream Processing is now generally available. Read our blog to learn more.

This post is also available in: Deutsch, Français, Español, Português, Italiano, 한국어, 简体中文.

Today, we’re excited to announce that Atlas Stream Processing is now in public preview. Any developer on Atlas interested in giving it a try has full access. Learn more in our docs or get started today.

Listen to the MongoDB Podcast to learn about the Atlas Stream Processing public preview from Head of Streaming Products, Kenny Gorman.


Developers love the flexibility and ease of use of the document model, alongside the Query API, which allows them to work with data as code in MongoDB Atlas. With Atlas Stream Processing, we are bringing these same foundational principles to stream processing.

A report on the topic from S&P Global Market Intelligence's 451 Research put it this way: “A unified approach to leveraging data for application development — the direction of travel for MongoDB — is particularly valuable in the context of stream processing where operational and development complexity has proven a significant barrier to adoption.”

First announced at .local NYC 2023, Atlas Stream Processing is redefining the experience of aggregating and enriching streams of high velocity, rapidly changing event data, and unifying how to work with data in motion and at rest.

How are developers using the product so far? And what have we learned?

During the private preview, thousands of development teams requested access, and we gathered useful feedback from hundreds of engaged teams. One of those teams is marketing technology leader Acoustic:

"At Acoustic, our key focus is to empower brands with behavioral insights that enable them to create engaging, personalized customer experiences. To do so, our Acoustic Connect platform must be able to efficiently process and manage millions of marketing, behavioral, and customer signals as they occur. With Atlas Stream Processing, our engineers can leverage the skills they already have from working with data in Atlas to process new data continuously, ensuring our customers have access to real-time customer insights."
John Riewerts, EVP, Engineering at Acoustic

Other interesting use cases include:

  • A leading global airline using complex aggregations to rapidly process maintenance and operations data, ensuring on-time flights for their thousands of daily customers,

  • A large manufacturer of energy equipment using Atlas Stream Processing to enable continuous monitoring of high-volume pump data to avoid outages and optimize their yields, and

  • An innovative enterprise SaaS provider leveraging the rich processing capabilities in Atlas Stream Processing to deliver timely and contextual in-product alerts to drive improved product engagement.

These are just a few of the many use cases we’re seeing across industries. Beyond the use cases we’ve already seen, developers are giving us tons of insight into what they’d like to see us add in the future.

In addition to enabling continuous processing of data in Atlas databases through change streams, it’s exciting to see developers using Atlas Stream Processing with their Kafka data hosted by valued partners like Confluent, Amazon MSK, Azure Event Hubs, and Redpanda. Our aim with the developer data platform capabilities in Atlas has always been to deliver a better experience across the key technologies developers rely on.
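
To make that concrete, here is a minimal sketch of an interactive pipeline run from mongosh against a stream processing instance. The connection name "kafkaProd", the topic "orders", and the field names are hypothetical placeholders for entries in your instance's connection registry; sp.process() runs the pipeline continuously and prints results to the shell.

```javascript
// A minimal sketch, assuming a Kafka connection named "kafkaProd" has been
// added to the stream processing instance's connection registry; the topic
// and field names are hypothetical.
sp.process([
  // Continuously read events from a Kafka topic
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  // Filter and reshape events with familiar Query API stages
  { $match: { status: "shipped" } },
  { $project: { _id: 0, orderId: 1, status: 1, shippedAt: 1 } }
])
```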

What’s new in the public preview?

That brings us to what’s new. As we scale to more teams, we’re expanding functionality to cover the most requested items from our private preview. From the many pieces of feedback we received, three common themes emerged:

  1. Refining the developer experience

  2. Expanding advanced features and functionality

  3. Improving operations and security

Refining the developer experience

In private preview, we established the core of the developer experience that is essential to making Atlas Stream Processing a natural solution for development teams. And in public preview, we’re doubling down on this by making two additional enhancements:

  • VS Code integration
    The MongoDB VS Code plugin now supports connecting to stream processing instances. Developers already using the plugin can create and manage processors in a familiar development environment. That means less time switching between tools and more time building your applications!

  • Improved dead letter queue (DLQ) capabilities
    DLQ support is a key element of robust stream processing, and in public preview we’re expanding DLQ capabilities. DLQ messages now surface directly when executing pipelines with sp.process() and when running .sample() on running processors, streamlining development without requiring you to set up a target collection to act as a DLQ. A short sketch follows below.
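
As a sketch of how this looks in practice, the pipeline below validates incoming events and routes failures to the DLQ; when run with sp.process(), those failed documents now print inline in the shell rather than requiring a preconfigured DLQ collection. The connection name, topic, and schema here are hypothetical.

```javascript
// Hypothetical connection, topic, and schema; documents that fail $validate
// are routed to the DLQ and now surface directly in sp.process() output.
sp.process([
  { $source: { connectionName: "kafkaProd", topic: "sensorEvents" } },
  {
    $validate: {
      validator: { $jsonSchema: { required: ["deviceId", "timestamp"] } },
      validationAction: "dlq"
    }
  },
  { $match: { reading: { $gt: 0 } } }
])
```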

Expanding advanced features and functionality

Atlas Stream Processing already supported many of the key aggregation operators developers are familiar with from the Query API used with data at rest, and we’ve since added powerful windowing capabilities and the ability to easily merge and emit data to an Atlas database or a Kafka topic. Public preview adds even more functionality demanded by the most advanced teams relying on stream processing to deliver customer experiences (the sketch following this list combines several of these capabilities):

  • $lookup
    Developers can now enrich documents being processed in a stream processor with data from remote Atlas clusters, performing joins against fields from the document and the target collection.

  • Change streams pre- and post-imaging
    Many developers use Atlas Stream Processing to continuously process data in Atlas databases, with change streams as a source. In public preview, we have enhanced the change stream $source with support for pre- and post-images. This enables common use cases where developers need to calculate deltas between fields in documents, as well as use cases requiring access to the full contents of a deleted document.

  • Conditional routing with dynamic expressions in merge and emit stages
    Conditional routing lets developers use the values of fields in documents being processed to dynamically send specific messages to different Atlas collections or Kafka topics. The $merge and $emit stages now support dynamic expressions, making it possible to use the Query API to fork messages to different collections or topics as needed.

  • Idle stream timeouts
    Streams whose watermarks stop advancing due to a lack of inbound data can now be configured to close after a period of time, emitting the results of any open windows. This can be critical for streaming sources with inconsistent flows of data.
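
To make the features above concrete, here is a hedged sketch of a single stream processor that combines several of them: a change stream $source configured for pre- and post-images, a $lookup against a remote Atlas collection, a tumbling window with an idle timeout, and an $emit stage that routes each result to a Kafka topic chosen dynamically from a document field. All connection, database, collection, and field names are hypothetical, and option names such as fullDocumentBeforeChange and idleTimeout should be confirmed against the current documentation.

```javascript
sp.createStreamProcessor("orderInsights", [
  // Change stream source with pre- and post-images (assumes the source
  // collection has pre-/post-images enabled in Atlas)
  {
    $source: {
      connectionName: "atlasCluster",
      db: "sales",
      coll: "orders",
      config: {
        fullDocument: "required",
        fullDocumentBeforeChange: "required"
      }
    }
  },
  // Enrich each event by joining against a collection on a remote cluster
  {
    $lookup: {
      from: { connectionName: "atlasCluster", db: "catalog", coll: "products" },
      localField: "fullDocument.productId",
      foreignField: "_id",
      as: "product"
    }
  },
  // Aggregate over one-minute tumbling windows; idleTimeout closes a window
  // and emits its results when no new data advances the watermark
  {
    $tumblingWindow: {
      interval: { size: 60, unit: "second" },
      idleTimeout: { size: 30, unit: "second" },
      pipeline: [
        { $group: { _id: "$fullDocument.region", orderCount: { $sum: 1 } } }
      ]
    }
  },
  // Route each windowed result to a Kafka topic chosen dynamically from a
  // field in the document
  { $emit: { connectionName: "kafkaProd", topic: "$_id" } }
])
sp.orderInsights.start()
```

Once started, running sp.orderInsights.sample() from the shell is one way to watch the processor's output.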

Improving operations and security

Finally, we have invested heavily over the past few months in improving other operational and security aspects of Atlas Stream Processing. A few of the highlights include:

  • Checkpointing
    Atlas Stream Processing now performs checkpoints to save state while processing. Stream processors are continuously running processes, so whether due to a data issue or an infrastructure failure, they require an intelligent recovery mechanism. Checkpoints make it easy to resume a stream processor from the point where data stopped being collected and processed.

  • Terraform provider support
    Support for the creation of connections and stream processing instances (SPIs) is now available with Terraform. This allows for infrastructure to be authored as code for repeatable deployments.

  • Security roles
    Atlas Stream Processing has added a project-level role, giving users just enough permission to perform their stream processing tasks. Stream processors can run under the context of a specific role, supporting a least privilege configuration.

  • Auditing
    Atlas Stream Processing can now audit authentication attempts and actions within your stream processing instance, giving you insight into security-related events.

  • Kafka consumer group support
    Stream processors now use Kafka consumer groups for offset tracking. This lets users easily change a processor’s position in the stream and monitor for potential processor lag.

A final note on what’s new is that in public preview, we will begin charging for Atlas Stream Processing, using preview pricing (subject to change). You can learn more about pricing in our documentation.

Build your first stream processor today

Public preview is a huge step forward for us as we expand the developer data platform, enabling more teams with a stream processing solution that simplifies the operational complexity of building reactive, responsive, event-driven applications while also improving the developer experience.

We can’t wait to see what you build!

Log in today to get started with the tutorial, view our resources, or follow the Learning Byte on MongoDB University.