May 2024

Hello MongoDB Team,

We’re observing duplicate documents across the shards of one of our clusters.
This cluster has 2 shards. When we query for the same _id through the router and then directly on shard0 and shard1, the same document is returned from both shards:

mongos> db.myColl.countDocuments({_id: _id})
1
mongos>

Shard0 Response:

shard0:PRIMARY> db.myColl.countDocuments({_id: _id})
1
shard0:PRIMARY>

Shard1 Response:

shard1:PRIMARY> db.myColl.countDocuments({_id: _id})
1
shard1:PRIMARY>
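One way to see which copy is the orphan is to check, from mongos, which shard the config metadata says should own the document's _id range. A minimal sketch (the namespace "myDB.myColl" is the one from this thread; config.chunks is keyed by the ns field on 4.2):

```javascript
// Run from mongos. List this collection's chunk ranges and owning shards;
// the chunk whose [min, max) range covers the document's _id names the
// rightful owner, so the copy on the *other* shard is the orphan.
db.getSiblingDB("config").chunks.find(
  { ns: "myDB.myColl" },
  { _id: 0, min: 1, max: 1, shard: 1 }
).sort({ min: 1 })
```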

We ran cleanupOrphaned multiple times on this cluster, but it did not remove the orphaned documents:

Shard0:

shard0:PRIMARY> db.runCommand( {
...    cleanupOrphaned: "myDB.myColl"
... } )
2024-05-16T08:42:44.976+0000 I SHARDING [Collection-Range-Deleter] No documents remain to delete in myDB.myColl range [{ _id: MinKey }, { _id: -someid })
2024-05-16T08:42:44.977+0000 I SHARDING [Collection-Range-Deleter] Waiting for majority replication of local deletions in myDB.myColl range [{ _id: MinKey }, { _id: -someid })
2024-05-16T08:42:44.977+0000 I SHARDING [Collection-Range-Deleter] Finished deleting documents in myDB.myColl range [{ _id: MinKey }, { _id: -someid })

The command completed successfully on both shards, but the orphaned documents were not actually removed.
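One thing worth noting: on MongoDB 4.2, a single cleanupOrphaned invocation only processes one contiguous orphaned range, so the documented pre-4.4 pattern is to loop it on each shard's primary until stoppedAtKey comes back null. A sketch of that loop ("myDB.myColl" is this thread's namespace):

```javascript
// Run against each shard's primary. Each call cleans one orphaned range
// and reports where it stopped; keep going until no range remains.
var nextKey = {};
var result;
while (nextKey != null) {
  result = db.adminCommand({
    cleanupOrphaned: "myDB.myColl",
    startingFromKey: nextKey
  });
  if (result.ok != 1) {
    print("cleanupOrphaned failed or timed out; retry later.");
    break;
  }
  printjson(result);
  nextKey = result.stoppedAtKey;
}
```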

mongos> db.serverStatus().version
4.2.12
mongos>

Shard0:

shard0:PRIMARY> db.serverStatus().version
4.2.12
shard0:PRIMARY>

Shard1:

shard1:PRIMARY> db.serverStatus().version
4.2.12
shard1:PRIMARY>

Hi @Atish_Andhare and welcome to the community forum.

The behaviour above seems a bit unusual.
Could you help me with some additional information:

  1. What steps were followed to shard the collection?
  2. Which shard key (and index) is being used?
  3. What is the shard configuration?

Also, since MongoDB 4.2 is old (it reached end of life in April 2023), it will not receive further updates. The recommendation would therefore be to follow the documentation on Update to Latest Version and let us know if you are still facing the same issue.

Best Regards
Aasawari