Jun 2024

We have a collection of vehicles; its size is currently ~111.00 documents and increasing.
Each time a vehicle is modified, we store an entry in a vehicle-history-log collection.
We limit the history documents for each vehicle to, let’s say, 100 (configurable).

Currently we’re doing that very inefficiently in our application (roughly sketched below):

  1. Load all history documents for a vehicle sorted by date ascending
  2. If the number of docs is >= the max number, delete the oldest one
  3. Insert a new history document
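
For illustration, a simplified sketch of those three steps (pymongo syntax here just as an example; collection and field names are placeholders):

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient()  # connection details omitted
history = client["fleet"]["vehicle_history_log"]  # placeholder names

MAX_HISTORY = 100  # configurable limit mentioned above


def add_history_entry(vehicle_id, entry):
    # 1. Load all history documents for the vehicle, sorted by date ascending
    docs = list(history.find({"vehicleId": vehicle_id}).sort("date", ASCENDING))

    # 2. If the number of docs is >= the max number, delete the oldest one
    if len(docs) >= MAX_HISTORY:
        history.delete_one({"_id": docs[0]["_id"]})

    # 3. Insert the new history document
    history.insert_one({"vehicleId": vehicle_id, **entry})
```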

This has several flaws that I want to overcome. Some optimizations are obvious, like the following:

  • Load all documents
    → Only load the _id

  • Once the max number of history documents is reached, one has to be deleted every time a new one is inserted
    → Instead of capping at exactly 100 docs, allow e.g. 150 and, once that number is reached, delete 50 docs at once (see the sketch after this list)
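
A sketch of what that optimized variant could look like (again pymongo as an example; the threshold values and names are just placeholders):

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient()
history = client["fleet"]["vehicle_history_log"]  # placeholder names

HARD_LIMIT = 150  # let the history overshoot the desired 100 ...
TRIM_BATCH = 50   # ... then trim it back down in one batch


def add_history_entry(vehicle_id, entry):
    # Load only the _id values (oldest first) instead of the full documents
    ids = [d["_id"] for d in history.find(
        {"vehicleId": vehicle_id}, projection={"_id": 1}
    ).sort("date", ASCENDING)]

    # Once the overshoot limit is reached, delete a whole batch in one request
    if len(ids) >= HARD_LIMIT:
        history.delete_many({"_id": {"$in": ids[:TRIM_BATCH]}})

    history.insert_one({"vehicleId": vehicle_id, **entry})
```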

But all of this is fiddling around in our application with at least 2 database requests.
At peaks, a history doc is inserted up to 1,000 times per second for different vehicles.

So my question is whether there’s another, more resource-efficient way to do that?

  • Maybe some kind of aggregation that inserts the new document and deletes the old ones in a single operation.
  • Or are there other ways to accomplish that?

Thanks for any ideas!

If the history documents are small enough, you could store them in an array and then use the $push, $each and $slice operators to limit the array’s size.
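
A minimal sketch of that idea (pymongo syntax; it assumes the entries are embedded in a history array on the vehicle document itself, and the names are placeholders):

```python
from pymongo import MongoClient

client = MongoClient()
vehicles = client["fleet"]["vehicles"]  # placeholder names

MAX_HISTORY = 100


def add_history_entry(vehicle_id, entry):
    # $push with $each appends the new entry; $slice: -MAX_HISTORY keeps only
    # the newest MAX_HISTORY elements, so the server trims the oldest entries
    # as part of the same single update.
    vehicles.update_one(
        {"_id": vehicle_id},
        {"$push": {"history": {"$each": [entry], "$slice": -MAX_HISTORY}}},
    )
```

That way the cap is enforced server-side in a single request, but it only works as long as the whole array stays within the 16 MB document size limit.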

Hi Peter,
thanks for your reply.
It depends on what ‘small’ means … in fact they are ~30 KB each, and it’s configurable how many of them should be kept before older ones are deleted - currently 500, so at 500 × ~30 KB ≈ 15 MB it would run right up against the 16 MB document limit …

In fact, I have now changed our application to only request the _id instead of the whole document (loading it all was pointless …).
This should solve the problem … we’ll see … just rolled out on PROD 5 minutes ago …