
db.collection.aggregate()

On this page

  • Definition
  • Compatibility
  • Syntax
  • Behavior
  • Examples

MongoDB with drivers

This page documents a mongosh method. To see the equivalent method in a MongoDB driver, see the corresponding page for your programming language:

C#, Java Sync, Node.js, PyMongo, C, C++, Go, Java RS, Kotlin Coroutine, Kotlin Sync, PHP, Motor, Mongoid, Rust, Scala, Swift

db.collection.aggregate(pipeline, options)

Calculates aggregate values for the data in a collection or a view.

Returns:
  • A cursor for the documents produced by the final stage of the aggregation pipeline.
  • If the pipeline includes the explain option, the query returns a document that provides details on the processing of the aggregation operation.
  • If the pipeline includes the $out or $merge operators, the query returns an empty cursor.

You can use db.collection.aggregate() for deployments hosted in the following environments:

  • MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud

  • MongoDB Enterprise: The subscription-based, self-managed version of MongoDB

  • MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB

The aggregate() method has the following form:

db.collection.aggregate( <pipeline>, <options> )

The aggregate() method takes the following parameters:

  • pipeline (array): A sequence of data aggregation operations or stages. See the aggregation pipeline operators for details.

    The method can still accept the pipeline stages as separate arguments instead of as elements in an array; however, if you do not specify the pipeline as an array, you cannot specify the options parameter (see the example following this list).

  • options (document): Optional. Additional options that aggregate() passes to the aggregate command. Available only if you specify the pipeline as an array.
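For example, both of the following calls run the same two-stage pipeline against the orders collection used in the examples below; the second passes the stages as separate arguments and therefore cannot take an options document. This is a minimal sketch, and the allowDiskUse option is shown only for illustration:

db.orders.aggregate(
   [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
   ],
   { allowDiskUse: true }
)

db.orders.aggregate(
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
)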

The options document can contain the following fields and values:

Changed in version 5.0.

  • explain (boolean): Optional. Specifies to return the information on the processing of the pipeline. See Return Information on Aggregation Pipeline Operation for an example.

    Not available in multi-document transactions.
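    For example, a minimal sketch that passes the option directly (the pipeline is illustrative):

    db.orders.aggregate(
       [ { $match: { status: "A" } } ],
       { explain: true }
    )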

  • allowDiskUse (boolean): Optional. Enables writing to temporary files. When set to true, aggregation operations can write data to the _tmp subdirectory in the dbPath directory. See Interaction with allowDiskUseByDefault for an example.

    The profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

  • cursor (document): Optional. Specifies the initial batch size for the cursor. The value of the cursor field is a document with the field batchSize. See Specify an Initial Batch Size for syntax and example.

  • maxTimeMS (non-negative integer): Optional. Specifies a time limit in milliseconds for processing operations on a cursor. The default value is 60000 milliseconds, or 60 seconds. If you explicitly set the value to 0, operations will not time out.

    MongoDB terminates operations that exceed their allotted time limit using the same mechanism as db.killOp(). MongoDB only terminates an operation at one of its designated interrupt points.
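    For example, a minimal sketch that caps processing at five seconds (the pipeline and limit are illustrative):

    db.orders.aggregate(
       [ { $match: { status: "A" } } ],
       { maxTimeMS: 5000 }
    )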

  • bypassDocumentValidation (boolean): Optional. Applicable only if you specify the $out or $merge aggregation stages.

    Enables db.collection.aggregate() to bypass document validation during the operation. This lets you insert documents that do not meet the validation requirements.
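    A minimal sketch, assuming a hypothetical acceptedOrders target collection whose validator the written documents might not satisfy:

    db.orders.aggregate(
       [
          { $match: { status: "A" } },
          { $out: "acceptedOrders" }   // hypothetical output collection with a validator
       ],
       { bypassDocumentValidation: true }
    )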

  • readConcern (document): Optional. Specifies the read concern.

    The readConcern option has the following syntax: readConcern: { level: <value> }

    Possible read concern levels are "local", "available", "majority", and "linearizable". For more information on the read concern levels, see Read Concern Levels.

    The $out and $merge stages cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out or $merge stage in the pipeline.

  • collation (document): Optional. Specifies the collation to use for the operation.

    Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.

    The collation option has the following syntax:

    collation: {
       locale: <string>,
       caseLevel: <boolean>,
       caseFirst: <string>,
       strength: <int>,
       numericOrdering: <boolean>,
       alternate: <string>,
       maxVariable: <string>,
       backwards: <boolean>
    }

    When specifying collation, the locale field is mandatory; all other collation fields are optional. For descriptions of the fields, see Collation Document.

    If the collation is unspecified but the collection has a default collation (see db.createCollection()), the operation uses the collation specified for the collection.

    If no collation is specified for the collection or for the operations, MongoDB uses the simple binary comparison used in prior versions for string comparisons.

    You cannot specify multiple collations for an operation. For example, you cannot specify different collations per field, or if performing a find with a sort, you cannot use one collation for the find and another for the sort.

  • hint (string or document): Optional. The index to use for the aggregation. The index is on the initial collection/view against which the aggregation is run.

    Specify the index either by the index name or by the index specification document.

    Note: The hint does not apply to $lookup and $graphLookup stages.

  • comment (string): Optional. Users can specify an arbitrary string to help trace the operation through the database profiler, currentOp, and logs.

  • writeConcern (document): Optional. A document that expresses the write concern to use with the $out or $merge stage.

    Omit to use the default write concern with the $out or $merge stage.
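    For example, a minimal sketch that writes the pipeline output with "majority" write concern (the ordersSummary target collection is hypothetical):

    db.orders.aggregate(
       [
          { $match: { status: "A" } },
          { $merge: { into: "ordersSummary" } }   // hypothetical output collection
       ],
       { writeConcern: { w: "majority" } }
    )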

  • let (document): Optional. Specifies a document with a list of variables. This allows you to improve command readability by separating the variables from the query text.

    The document syntax is:

    {
       <variable_name_1>: <expression_1>,
       ...,
       <variable_name_n>: <expression_n>
    }

    The variable is set to the value returned by the expression, and cannot be changed afterwards.

    To access the value of a variable in the command, use the double dollar sign prefix ($$) together with your variable name in the form $$<variable_name>. For example: $$targetTotal.

    To use a variable to filter results in a pipeline $match stage, you must access the variable within the $expr operator.

    For a complete example using let and variables, see Use Variables in let.

    New in version 5.0.

If an error occurs, the aggregate() helper throws an exception.

In mongosh, if the cursor returned from the db.collection.aggregate() is not assigned to a variable using the var keyword, then mongosh automatically iterates the cursor up to 20 times. See Iterate a Cursor in mongosh for handling cursors in mongosh.
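For example, a minimal sketch (using the orders collection from the examples below) that keeps the cursor for manual iteration:

// Assign the cursor to a variable so mongosh does not iterate it automatically
var myCursor = db.orders.aggregate( [ { $match: { status: "A" } } ] )

// Iterate the cursor manually
while ( myCursor.hasNext() ) {
   printjson( myCursor.next() )
}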

Cursors returned from aggregation only support cursor methods that operate on evaluated cursors (i.e. cursors whose first batch has been retrieved), such as the following methods:

For more information, see:

For cursors created inside a session, you cannot call getMore outside the session.

Similarly, for cursors created outside of a session, you cannot call getMore inside a session.

MongoDB drivers and mongosh associate all operations with a server session, with the exception of unacknowledged write operations. For operations not explicitly associated with a session (i.e. operations not run under a session started with Mongo.startSession()), MongoDB drivers and mongosh create an implicit session and associate it with the operation.

If a session is idle for longer than 30 minutes, the MongoDB server marks that session as expired and may close it at any time. When the MongoDB server closes the session, it also kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. See Session Idle Timeout for more information.
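A minimal sketch of that pattern, assuming a database named test and the orders collection; the session helpers and refreshSessions usage follow the shell's Session API, so see Session Idle Timeout for the authoritative steps:

// Start an explicit session so the cursor is not tied to an implicit session
var session = db.getMongo().startSession()
var sessionDb = session.getDatabase( "test" )   // assumed database name

var cursor = sessionDb.orders.aggregate( [ { $match: { status: "A" } } ] )

// While the cursor is still in use, periodically refresh the session so the
// server does not expire it after 30 idle minutes
db.adminCommand( { refreshSessions: [ session.getSessionId() ] } )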

db.collection.aggregate() can be used inside distributed transactions.

However, the following stages are not allowed within transactions:

You also cannot specify the explain option.

  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.

  • For cursors created in a transaction, you cannot call getMore outside the transaction.

Important

In most cases, a distributed transaction incurs a greater performance cost over single document writes, and the availability of distributed transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for distributed transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.

For db.collection.aggregate() operations that do not include the $out or $merge stages:

Starting in MongoDB 4.2, if the client that issued db.collection.aggregate() disconnects before the operation completes, MongoDB marks db.collection.aggregate() for termination using killOp.

The following examples use the collection orders that contains the following documents:

db.orders.insertMany( [
{ _id: 1, cust_id: "abc1", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: "A", amount: 50 },
{ _id: 2, cust_id: "xyz1", ord_date: ISODate("2013-10-01T17:04:11.102Z"), status: "A", amount: 100 },
{ _id: 3, cust_id: "xyz1", ord_date: ISODate("2013-10-12T17:04:11.102Z"), status: "D", amount: 25 },
{ _id: 4, cust_id: "xyz1", ord_date: ISODate("2013-10-11T17:04:11.102Z"), status: "D", amount: 125 },
{ _id: 5, cust_id: "abc1", ord_date: ISODate("2013-11-12T17:04:11.102Z"), status: "A", amount: 25 }
] )

The following aggregation operation selects documents with status equal to "A", groups the matching documents by the cust_id field and calculates the total for each cust_id field from the sum of the amount field, and sorts the results by the total field in descending order:

db.orders.aggregate( [
{ $match: { status: "A" } },
{ $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
{ $sort: { total: -1 } }
] )

The operation returns a cursor with the following documents:

[
{ _id: "xyz1", total: 100 },
{ _id: "abc1", total: 75 }
]

mongosh iterates the returned cursor automatically to print the results. See Iterate a Cursor in mongosh for handling cursors manually in mongosh.

The following example uses db.collection.explain() to view detailed information regarding the execution plan of the aggregation pipeline.

db.orders.explain().aggregate( [
{ $match: { status: "A" } },
{ $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
{ $sort: { total: -1 } }
] )

The operation returns a document that details the processing of the aggregation pipeline. For example, the document may show, among other details, which index, if any, the operation used. [1] If the orders collection is a sharded collection, the document would also show the division of labor between the shards and the merge operation, and for targeted queries, the targeted shards.

Note

The intended readers of the explain output document are humans, and not machines, and the output format is subject to change between releases.

You can view more verbose explain output by passing the executionStats or allPlansExecution explain modes to the db.collection.explain() method.

[1] Index Filters can affect the choice of index used. See Index Filters for details.

Starting in MongoDB 6.0, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default. These temporary files last for the duration of the pipeline execution and can influence storage space on your instance. In earlier versions of MongoDB, you must pass { allowDiskUse: true } to individual find and aggregate commands to enable this behavior.

Individual find and aggregate commands can override the allowDiskUseByDefault parameter by either:

  • Using { allowDiskUse: true } to allow writing temporary files out to disk when allowDiskUseByDefault is set to false

  • Using { allowDiskUse: false } to prohibit writing temporary files out to disk when allowDiskUseByDefault is set to true
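For example, a minimal sketch that explicitly permits temporary files for a single aggregation even if allowDiskUseByDefault is false (the pipeline is illustrative):

db.orders.aggregate(
   [
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } }
   ],
   { allowDiskUse: true }
)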

The profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

For more information, see Aggregation Pipeline Limits.

To specify an initial batch size for the cursor, use the following syntax for the cursor option:

cursor: { batchSize: <int> }

For example, the following aggregation operation specifies the initial batch size of 0 for the cursor:

db.orders.aggregate(
   [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } },
      { $limit: 2 }
   ],
   {
      cursor: { batchSize: 0 }
   }
)

The { cursor: { batchSize: 0 } } document, which specifies the initial batch size, indicates an empty first batch. This batch size is useful for quickly returning a cursor or failure message without doing significant server-side work.

To specify batch size for subsequent getMore operations (after the initial batch), use the batchSize field when running the getMore command.

mongosh iterates the returned cursor automatically to print the results. See Iterate a Cursor in mongosh for handling cursors manually in mongosh.

Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.

A collection restaurants has the following documents:

db.restaurants.insertMany( [
{ _id: 1, category: "café", status: "A" },
{ _id: 2, category: "cafe", status: "a" },
{ _id: 3, category: "cafE", status: "a" }
] )

The following aggregation operation includes the collation option:

db.restaurants.aggregate(
   [
      { $match: { status: "A" } },
      { $group: { _id: "$category", count: { $sum: 1 } } }
   ],
   { collation: { locale: "fr", strength: 1 } }
);

Note

If performing an aggregation that involves multiple views, such as with $lookup or $graphLookup, the views must have the same collation.

For descriptions of the collation fields, see Collation Document.

Create a collection food with the following documents:

db.food.insertMany( [
{ _id: 1, category: "cake", type: "chocolate", qty: 10 },
{ _id: 2, category: "cake", type: "ice cream", qty: 25 },
{ _id: 3, category: "pie", type: "boston cream", qty: 20 },
{ _id: 4, category: "pie", type: "blueberry", qty: 15 }
] )

Create the following indexes:

db.food.createIndex( { qty: 1, type: 1 } );
db.food.createIndex( { qty: 1, category: 1 } );

The following aggregation operation includes the hint option to force the usage of the specified index:

db.food.aggregate(
   [
      { $sort: { qty: 1 } },
      { $match: { category: "cake", qty: 10 } },
      { $sort: { type: -1 } }
   ],
   { hint: { qty: 1, category: 1 } }
)

Use the readConcern option to specify the read concern for the operation.

You cannot use the $out or the $merge stage in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include either stage in the pipeline.

The following operation on a replica set specifies a Read Concern of "majority" to read the most recent copy of the data confirmed as having been written to a majority of the nodes.

Note

  • To ensure that a single thread can read its own writes, use "majority" read concern and "majority" write concern against the primary of the replica set.

  • You can specify read concern level "majority" for an aggregation that includes an $out stage.

  • Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.

db.restaurants.aggregate(
[ { $match: { rating: { $lt: 5 } } } ],
{ readConcern: { level: "majority" } }
)

A collection named movies contains documents formatted as follows:

db.movies.insertOne(
   {
      _id: ObjectId("599b3b54b8ffff5d1cd323d8"),
      title: "Jaws",
      year: 1975,
      imdb: "tt0073195"
   }
)

The following aggregation operation finds movies created in 1995 and includes the comment option to provide tracking information in the logs, the db.system.profile collection, and db.currentOp.

db.movies.aggregate( [ { $match: { year : 1995 } } ], { comment : "match_all_movies_from_1995" } ).pretty()

On a system with profiling enabled, you can then query the system.profile collection to see all recent similar aggregations, as shown below:

db.system.profile.find( { "command.aggregate": "movies", "command.comment" : "match_all_movies_from_1995" } ).sort( { ts : -1 } ).pretty()

This will return a set of profiler results in the following format:

{
   "op" : "command",
   "ns" : "video.movies",
   "command" : {
      "aggregate" : "movies",
      "pipeline" : [
         {
            "$match" : {
               "year" : 1995
            }
         }
      ],
      "comment" : "match_all_movies_from_1995",
      "cursor" : {
      },
      "$db" : "video"
   },
   ...
}

An application can encode any arbitrary information in the comment in order to more easily trace or identify specific operations through the system. For instance, an application might attach a string comment incorporating its process ID, thread ID, client hostname, and the user who issued the command.

New in version 5.0.

To define variables that you can access elsewhere in the command, use the let option.

Note

To filter results using a variable in a pipeline $match stage, you must access the variable within the $expr operator.

Create a collection cakeSales containing sales for cake flavors:

db.cakeSales.insertMany( [
{ _id: 1, flavor: "chocolate", salesTotal: 1580 },
{ _id: 2, flavor: "strawberry", salesTotal: 4350 },
{ _id: 3, flavor: "cherry", salesTotal: 2150 }
] )

The following example:

  • retrieves the cake that has a salesTotal greater than 3000, which is the cake with an _id of 2

  • defines a targetTotal variable in let, which is referenced in $gt as $$targetTotal

db.cakeSales.aggregate(
   [
      { $match: {
         $expr: { $gt: [ "$salesTotal", "$$targetTotal" ] }
      } }
   ],
   { let: { targetTotal: 3000 } }
)
