I’ve got a three-node Kubernetes cluster, with each node having a ~1 TB drive. I’ve created a 100 GB Persistent Volume Claim for MongoDB.
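For context, the PVC looks roughly like this (the name and storage class below are placeholders, not the exact values from my manifests):

```
# Rough sketch of the PVC backing MongoDB's data directory.
# "mongodb-data" and "standard" are placeholder names.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 100Gi
```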
When I check the oplog size with rs.printReplicationInfo(), it reports 45011.683349609375 MB (about 45 GB).
When I run db.oplog.rs.dataSize(), it reports 469617794, which I understand is in bytes, so only around 450 MB of actual oplog data.
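In other words, this is roughly what I’m comparing (run in the mongo shell; the byte-to-MB conversion is just my own arithmetic):

```
// Allocated oplog size, reported in MB by the shell helper.
rs.printReplicationInfo();             // shows ~45011 MB configured

// Actual data currently held in the oplog, returned in bytes.
var bytes = db.getSiblingDB("local").oplog.rs.dataSize();
print((bytes / (1024 * 1024)).toFixed(1) + " MB");  // ~447.9 MB
```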
Should I explicitly set the oplog size in the conf file to 5% of the Persistent Volume Claim’s size? It seems MongoDB is sizing the default oplog from the node’s system drive rather than from the PVC, so it ends up taking close to 50% of the space in my Persistent Volume Claim. Something like the snippet below is what I had in mind.
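This is just a sketch of what I mean, using replication.oplogSizeMB in mongod.conf; 5120 MB is simply my own 5%-of-100-GB figure, and the replica set name is a placeholder:

```
# mongod.conf (sketch): cap the oplog at ~5% of the 100 GB PVC
replication:
  replSetName: rs0        # placeholder name
  oplogSizeMB: 5120       # 5 GiB, my 5% of the 100 GB claim
```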