I am trying to bulk insert documents into my collection, knowing that some of them will be duplicates. The collection must not contain duplicates, so I have created a compound unique index, which also makes queries faster for my use case.
The problem is that whenever I do the bulkWrite insert (using insertOne operations) with the ordered option set to false, the duplicate key error is caught by my try/catch block instead of execution continuing so that I can read the writeErrors from the result object, as the documentation describes. Through trial and error, I have discovered that the error object itself has a writeErrors field containing most of my individual errors. I just need to know for sure that this is the expected behavior so that I can confidently rely on it.
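For reference, here is roughly what I'm running with the Node.js driver (the database, collection, field names, and index keys are just placeholders for this question):

```typescript
import { MongoClient, MongoBulkWriteError } from "mongodb";

// Placeholder connection string and names -- adjust for your environment.
const client = new MongoClient("mongodb://localhost:27017");
const coll = client.db("mydb").collection("events");

async function run() {
  // Compound unique index that rejects duplicates.
  await coll.createIndex({ userId: 1, eventDate: 1 }, { unique: true });

  const docs = [
    { userId: 1, eventDate: "2024-05-01" },
    { userId: 1, eventDate: "2024-05-01" }, // intentional duplicate
    { userId: 2, eventDate: "2024-05-02" },
  ];

  try {
    const result = await coll.bulkWrite(
      docs.map((doc) => ({ insertOne: { document: doc } })),
      { ordered: false } // unordered: keep going past duplicate-key errors
    );
    console.log("inserted:", result.insertedCount);
  } catch (err) {
    // This is where the duplicate key error ends up: the driver still throws
    // after the unordered batch finishes, and the per-operation failures are
    // on the error object rather than on a returned result.
    if (err instanceof MongoBulkWriteError) {
      console.log("inserted despite errors:", err.result.insertedCount);
      console.log("write errors:", err.writeErrors.length);
    } else {
      throw err;
    }
  } finally {
    await client.close();
  }
}

run();
```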
This seems ambiguous, since the documentation clearly says that an unordered bulkWrite won't throw an error for individual operations. Any help is appreciated.