Billy_Bui (Billy Bùi)
I bumped into this problem years ago. Even though we have the option not to sync indexes by passing { buildIndexes: false } when adding a member to a replica set, a TTL deletion that happens on the PRIMARY is always converted into a delete operation in the oplog and then replicated to the SECONDARYs. This way the whole replica set always holds the same set of data, which is the whole idea behind the replication approach.
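(For reference, this is roughly how such a member would be added; the hostname is a placeholder, and as far as I know MongoDB requires buildIndexes: false members to have priority 0, and they are usually hidden non-voting members as well.)

rs.add({
  host: 'node4.example.net:27017', // placeholder hostname
  priority: 0,                     // required when buildIndexes is false
  hidden: true,                    // typically hidden so clients never query it
  votes: 0,
  buildIndexes: false
});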
You can actually test this on your replica set with the following commands in mongosh:
db.getSiblingDB('test').getCollection('abcd').insertOne({ time: ISODate() });
db.getSiblingDB('test').getCollection('abcd').createIndex({ time: 1 }, { expireAfterSeconds: 30 });
db.getSiblingDB('test').getCollection('abcd').find();
// Returns 1 doc: [ { _id: ObjectId('671f6dada7485dfd07305aa8'), time: ISODate('2024-10-28T10:55:41.848Z') } ]
Wait at least 30 seconds (the TTL monitor only runs every 60 seconds, so the delete can take up to a minute to show up):
db.getSiblingDB('test').getCollection('abcd').find();
// Return 0 docs
db.getSiblingDB('local').getCollection('oplog.rs').find({ ns: 'test.abcd' });
You will get these two oplog entries:
[
  {
    ...
    op: 'i',
    ns: 'test.abcd',
    o: {
      _id: ObjectId('671f6dada7485dfd07305aa8'),
      time: ISODate('2024-10-28T10:55:41.848Z')
    },
    o2: { _id: ObjectId('671f6dada7485dfd07305aa8') },
    wall: ISODate('2024-10-28T10:55:41.872Z'),
    ...
  },
  {
    op: 'd',
    ns: 'test.abcd',
    o: { _id: ObjectId('671f6dada7485dfd07305aa8') },
    wall: ISODate('2024-10-28T10:56:12.707Z'),
    ...
  }
]
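You can also confirm that the delete was replicated by reading from a SECONDARY; a quick check in mongosh could look like this (the read preference call is just one way to route the read there):

db.getMongo().setReadPref('secondary');
db.getSiblingDB('test').getCollection('abcd').find();
// Returns 0 docs here as well, because the 'd' oplog entry above was applied on the SECONDARY too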
So sadly I chose to give up and wrote a little Node.js program that copies my data manually, writes the original time value into a custom __archiveTime field, and relies on a TTL index on __archiveTime so that only documents that have already been copied get deleted.
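A stripped-down sketch of that idea, assuming the same test.abcd collection, a made-up abcd_archive target collection, and the official mongodb Node.js driver (the connection string and TTL value are placeholders):

const { MongoClient } = require('mongodb');

async function archiveAndFlag() {
  const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
  await client.connect();
  try {
    const source = client.db('test').collection('abcd');
    const archive = client.db('test').collection('abcd_archive'); // made-up archive collection

    // TTL index on the custom field: only documents that already carry
    // __archiveTime are eligible for removal by the TTL monitor.
    await source.createIndex({ __archiveTime: 1 }, { expireAfterSeconds: 30 });

    // Copy every document that has not been flagged yet, then flag it
    // with its original time so the TTL index can pick it up.
    const cursor = source.find({ __archiveTime: { $exists: false } });
    for await (const doc of cursor) {
      await archive.insertOne(doc);
      await source.updateOne(
        { _id: doc._id },
        { $set: { __archiveTime: doc.time } }
      );
    }
  } finally {
    await client.close();
  }
}

archiveAndFlag().catch(console.error);

Documents without __archiveTime are never touched by the TTL monitor, which is what makes this workaround safe for data that has not been copied yet.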