However, the $match stage does not use my existing index on "last_processed_time", no matter what I try. The explain plan always shows a COLLSCAN. If I use an index hint, the index is still not used; the planner just adds an extra filter stage instead.
Yes, new Date() on the client may differ from the server-side $$NOW, but for a day-related use case that should not make a big difference. And without the $expr, the index should be used.
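To illustrate (the exact window in our pipeline is more involved; the 30-day $dateSubtract below is just a placeholder, as are the database/collection names), this is roughly what the two variants look like:

```js
// Variant 1: $expr + $$NOW, as used in the view definition – this is the one
// that now ends in a COLLSCAN in the explain output.
db.getSiblingDB("mydb").myCollection.explain("executionStats").aggregate([
  { $match: { $expr: { $gte: [
      "$last_processed_time",
      { $dateSubtract: { startDate: "$$NOW", unit: "day", amount: 30 } }
  ] } } }
]);

// Variant 2: plain comparison against a client-side date literal – this form
// is eligible for an IXSCAN on ix_last_processed_time.
db.getSiblingDB("mydb").myCollection.explain("executionStats").aggregate([
  { $match: { last_processed_time: {
      $gte: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)
  } } }
]);
```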
Thanks for the suggestion on calculating the date using JS. Unfortunately, I need to create a view from the aggregation query, and views do not support JS as far as I know.
The view is used for Atlas SQL, which is loaded into Power BI. Since the collection is 1 TB+, loading and filtering in Power BI is not an option.
Like I said, the view has been in use for ~2 years, and in the past the $expr stage did use the index just fine. I'm wondering if MongoDB changed something without us noticing, or if it is a bug.
I completely missed the parts where you mentioned that it was a view and that it has been working for 2 years.
Please share the indexes you have. I ask because if it was working and now it is not, then maybe the indexes were not recreated during the server migration. How many documents do you have in your collection?
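For example, running getIndexes() on the collection backing the view (names below are placeholders) would show whether ix_last_processed_time survived the migration:

```js
// List the indexes on the source collection behind the view; if the migration
// dropped them, ix_last_processed_time will be missing from this output.
db.getSiblingDB("mydb").myCollection.getIndexes();
```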
As a workaround, it should be easy to write a JS script that updates the view.
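Something along these lines (the view name, collection name, and the 30-day window are placeholders; the idea is to recompute the date literal and rewrite the view pipeline on a schedule, e.g. from a cron job):

```js
// Recompute the cutoff and rewrite the view pipeline so the $match uses a
// plain date literal (index-eligible) instead of $expr/$$NOW.
const cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // assumed 30-day window

db.getSiblingDB("mydb").runCommand({
  collMod: "myView",          // placeholder view name
  viewOn: "myCollection",     // placeholder source collection
  pipeline: [
    { $match: { last_processed_time: { $gte: cutoff } } }
  ]
});
```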
We have quite a number of indexes on our collections. We use the $expr stage on multiple collections, and on all of our views the indexes are no longer used.
We have views that should use the ix_last_processed_time and ci_CPW.TIMESTAMP.ts_CPW.NAME indexes.
Sorry, I forgot to mention that this is happening on two separate clusters as well. The collections range from 2.5 million to 50 million documents, so we are seeing the issue on 2 clusters and 4 collections in total.
As a workaround, adding last_processed_time: {$gt: MinKey()} to the $match does make it use the index, but it still scans more keys than what I observed on 6.0.
A preceding {$sort: {last_processed_time: 1}} triggers the same plan as the $gt.
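In pipeline form, the two workarounds look roughly like this (the $expr window is simplified to a 30-day $dateSubtract as a placeholder, and the database/collection names are made up; our real pipeline differs):

```js
// Workaround 1: an extra bounded predicate on the indexed field makes the
// planner use ix_last_processed_time again (though it scans more keys than on 6.0).
db.getSiblingDB("mydb").myCollection.explain("executionStats").aggregate([
  { $match: {
      last_processed_time: { $gt: MinKey() },
      $expr: { $gte: [
        "$last_processed_time",
        { $dateSubtract: { startDate: "$$NOW", unit: "day", amount: 30 } }
      ] }
  } }
]);

// Workaround 2: a preceding $sort on the indexed field produces the same plan.
db.getSiblingDB("mydb").myCollection.explain("executionStats").aggregate([
  { $sort: { last_processed_time: 1 } },
  { $match: { $expr: { $gte: [
      "$last_processed_time",
      { $dateSubtract: { startDate: "$$NOW", unit: "day", amount: 30 } }
  ] } } }
]);
```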