We have a collection of vehicles; it currently holds ~111,000 documents and is growing.
Each time a vehicle is modified, we store an entry in a vehicle-history-log collection.
We limit the number of history documents per vehicle to, let's say, 100 (configurable).
Currently we do this very inefficiently in our application:
- Load all history documents for a vehicle, sorted by date ascending
- If the number of docs is >= the maximum, delete the oldest one
- Insert the new history document
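For reference, a minimal sketch of this flow, assuming pymongo and hypothetical collection/field names (vehicle_history_log, vehicle_id, date):

```python
from pymongo import MongoClient, ASCENDING

MAX_HISTORY = 100  # configurable per-vehicle limit

client = MongoClient()
history = client["fleet"]["vehicle_history_log"]

def add_history_entry(vehicle_id, entry):
    # 1) Load ALL history documents for the vehicle, oldest first.
    docs = list(history.find({"vehicle_id": vehicle_id}).sort("date", ASCENDING))
    # 2) If the configured maximum is reached, delete the oldest document.
    if len(docs) >= MAX_HISTORY:
        history.delete_one({"_id": docs[0]["_id"]})
    # 3) Insert the new history document.
    history.insert_one({"vehicle_id": vehicle_id, **entry})
```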
This has several flaws that I want to overcome; some optimizations are obvious, like the following:
- Load all documents → only load the _id
- Once the max. number of history documents is reached, a doc has to be deleted for every new insert → instead of allowing 100 docs, allow e.g. 150 and, when this number is reached, delete 50 docs at once
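Combined, the two optimizations could look like the following sketch (same assumed names as above; the 150/100 thresholds are the illustrative values from the list):

```python
SOFT_LIMIT = 150  # tolerated overshoot before a trim kicks in
HARD_LIMIT = 100  # target size after a trim

def add_history_entry_batched(vehicle_id, entry):
    history.insert_one({"vehicle_id": vehicle_id, **entry})
    # Count instead of loading documents; cheap with an index on vehicle_id.
    n = history.count_documents({"vehicle_id": vehicle_id})
    if n > SOFT_LIMIT:
        # Fetch only the _id of the surplus oldest entries...
        surplus = (
            history.find({"vehicle_id": vehicle_id}, {"_id": 1})
            .sort("date", ASCENDING)
            .limit(n - HARD_LIMIT)
        )
        # ...and delete them in one batch.
        history.delete_many({"_id": {"$in": [d["_id"] for d in surplus]}})
```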
But all of this is still fiddling around in our application, with at least 2 database requests per insert.
At peaks, history docs are inserted up to 1,000 times per second across different vehicles.
So my question is: is there another, more resource-efficient way to do this?
- Maybe some kind of aggregation that both inserts the new document and deletes the old ones (see the sketch after this list).
- Or are there other ways to accomplish that?
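Not an aggregation, but one single-request pattern that might be worth evaluating: model each vehicle's history as a capped array inside a single document and let MongoDB trim it on write with $push + $slice. A sketch, assuming a restructured data model (hypothetical collection vehicle_history):

```python
# One document per vehicle, history embedded as an array.
vehicle_history = client["fleet"]["vehicle_history"]

def add_history_entry_capped(vehicle_id, entry):
    # $push with $each + $slice appends the entry and keeps only the newest
    # 100 elements, all in a single write (assumes chronological inserts).
    vehicle_history.update_one(
        {"_id": vehicle_id},
        {"$push": {"history": {"$each": [entry], "$slice": -100}}},
        upsert=True,
    )
```

Whether this fits depends on the 16 MB document size limit and on how the history entries are queried.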
Thanks for any ideas!