Mar 6

I recently got a log entry about a slow query.
Part of my explained pipeline looks like this:

// previous stages, with the 1st $match
{ '$project': [Object],
  nReturned: 1436,
  executionTimeMillisEstimate: 6 },
{ '$lookup': [Object],
  totalDocsExamined: 1436,
  totalKeysExamined: 1436,
  collectionScans: 0,
  indexesUsed: [Array],
  nReturned: 1436,
  executionTimeMillisEstimate: 112 },
// other stages with ~112 ms each

And the $lookup stage itself:

{ '$lookup': { from: 'users', as: 'user', localField: 'userId', foreignField: '_id' },
  totalDocsExamined: 1434,
  totalKeysExamined: 1434,
  collectionScans: 0,
  indexesUsed: [ '_id_' ],
  nReturned: 1434,
  executionTimeMillisEstimate: 112 }

The users collection has about 3 million documents. Is there anything I can do, other than returning fewer documents from the $match stage (which was first in the pipeline)? Is this normal behavior? Should I scale up the server? If so, would CPU or memory be the better choice?

Mongo version ^7
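One thing worth noting: totalKeysExamined equals nReturned (1434), so each input document costs exactly one _id_ index hit, which is as cheap as a per-document join gets. If the join payload is the problem rather than the index work, the pipeline form of $lookup can trim each joined user before it is pulled into the result. A minimal sketch, assuming hypothetical field names (name, email) for the projection; $expr equality matches on _id should still use the _id_ index on 7.x:

```javascript
// Sketch only: the same join, rewritten in $lookup's pipeline form so
// each matched user is projected down before leaving the sub-pipeline.
const lookupStage = {
  $lookup: {
    from: 'users',
    let: { uid: '$userId' },
    pipeline: [
      // Equality $expr match on _id can be served by the _id_ index.
      { $match: { $expr: { $eq: ['$_id', '$$uid'] } } },
      // Keep only the fields downstream stages actually read
      // (name/email are placeholders, not from the original post).
      { $project: { name: 1, email: 1 } },
    ],
    as: 'user',
  },
};
```

This does not reduce totalDocsExamined, but it can reduce the bytes moved per joined document, which often matters more than the index lookups themselves.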

Is there some way to reduce the total number of documents examined? For example, is there a way to $match ahead of your $lookup to reduce the total number of documents that need to be examined?
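The suggestion above can be sketched roughly like this. The `status` filter and the projected fields are hypothetical stand-ins; the point is to narrow and slim the stream before the join, since $lookup cost scales with the number and size of documents entering it:

```javascript
// Sketch (assumed field names): filter early, drop unused fields,
// then join. Fewer and smaller input docs means less $lookup work.
const pipeline = [
  // Ideally this first $match is backed by an index on the field used.
  { $match: { status: 'active' } },
  // Shed everything the later stages don't need before the join.
  { $project: { userId: 1, status: 1 } },
  {
    $lookup: {
      from: 'users',
      localField: 'userId',
      foreignField: '_id',
      as: 'user',
    },
  },
];
```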

There is no way at the moment; that would require changing the app's architecture.
Still, this is normal behavior, right?