I am trying to bulk insert documents into my collection, knowing that some of them will be duplicates. Since I need to guarantee there are no duplicates, I have created a compound unique index, which also makes queries faster for my use case.
The problem is that whenever I do the bulkWrite insert (using insertOne) with the ordered option set to false, the duplicate key error is caught by my try/catch block instead of execution continuing so that I can read the writeErrors from the result object, as the documentation describes. Through trial and error, I have discovered that the error object itself has a writeErrors field that contains many of my errors. I just need to know for sure that this is the expected behavior so that I can confidently rely on this approach.
This seems ambiguous, as the documentation clearly says that an unordered bulkWrite won't throw an error for individual operations. Any help is appreciated.
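For reference, here is a minimal sketch of the pattern I am describing, using the Node.js driver. The database, collection, and field names are placeholders, and the unique index is assumed to already exist:

```js
// Sketch of an unordered bulk insert where duplicate-key failures are
// inspected on the thrown error rather than aborting the whole batch.
const { MongoClient, MongoBulkWriteError } = require('mongodb');

async function insertBatch(docs) {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    // Placeholder names; a compound unique index is assumed on this collection.
    const coll = client.db('mydb').collection('events');

    const result = await coll.bulkWrite(
      docs.map((doc) => ({ insertOne: { document: doc } })),
      { ordered: false } // keep executing the remaining operations on error
    );
    // Only reached when every insert succeeded.
    console.log('inserted:', result.insertedCount);
  } catch (err) {
    // What I am observing: the promise still rejects once the batch finishes,
    // but the per-operation failures are attached to the error object
    // (a MongoBulkWriteError) instead of a returned result.
    if (err instanceof MongoBulkWriteError) {
      console.log('inserted:', err.result.insertedCount);
      const duplicates = err.writeErrors.filter((e) => e.code === 11000);
      console.log('duplicate key errors:', duplicates.length);
    } else {
      throw err;
    }
  } finally {
    await client.close();
  }
}
```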