May 2022

I’ve got a three-node Kubernetes cluster, with each node having a ~1 TB drive. I’ve created a 100 GB Persistent Volume Claim for MongoDB.
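For reference, the claim is roughly the following (names and storage class are illustrative, not my exact manifest):

    # PersistentVolumeClaim (sketch; the name is illustrative)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mongodb-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi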

When I get the actual oplog size from rs.printReplicationInfo(), it reports 45011.683349609375 MB.

When I run db.oplog.rs.dataSize(), it reports 469617794 (bytes).

Should I set the oplog size in the conf file to 5% of the Persistent Volume Claim? It seems MongoDB is sizing the oplog from my node’s system drive, and in the process it is taking up around 50% of the space in my Persistent Volume Claim.
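For example, would setting something like the following in mongod.conf be the right approach? (The 5120 MB value and the replica set name are just illustrative.)

    # mongod.conf (sketch): cap the oplog explicitly instead of letting
    # MongoDB size it from the node's free disk space
    replication:
      replSetName: rs0        # illustrative replica set name
      oplogSizeMB: 5120       # roughly 5% of a 100 GB Persistent Volume Claim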

17 days later

Hi @Tim_Pynegar and welcome to the community forum!!

Do you mind providing more details regarding your setup:

  • For the above-mentioned deployment, is the 1 TB shared among all three nodes, or does each node have its own 1 TB drive?
  • Could you share the Deployment/StatefulSet and PersistentVolume/PersistentVolumeClaim YAML files for the deployment?
  • What MongoDB version is the deployment running?
  • Is the 5% figure your own specific requirement (which is perhaps better set explicitly in the config file; see also the sketch after this list), or are you asking about the default MongoDB behaviour?
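For reference, with the WiredTiger storage engine MongoDB sizes the oplog by default at 5% of free disk space (capped at 50 GB), which would be consistent with the ~45 GB you are seeing on a ~1 TB drive. As a minimal sketch (assuming MongoDB 4.0 or later; the 5120 MB value is only illustrative), the oplog of an already-running member can also be resized from the shell without editing the config file or restarting:

    // Run on each replica set member whose oplog should be resized;
    // the size is given in MB (the value here is illustrative)
    db.adminCommand({ replSetResizeOplog: 1, size: 5120 })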

P.S. Here are two things worth noting for reference:

  1. The oplog can grow beyond its set size when the majority commit point lags behind.

  2. The oplog is stored in a database called “local”, which resides in the dbPath of the instance.
    If the dbPath is on a PersistentVolume, then the size of the entire database, including the oplog, is bound by that volume (see the sketch below the list).
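A quick way to compare the configured cap with the space actually used (a sketch using standard shell helpers):

    // The oplog is the capped collection oplog.rs inside the "local" database
    var oplog = db.getSiblingDB("local").oplog.rs;
    oplog.stats().maxSize       // configured cap, in bytes
    oplog.dataSize()            // data currently stored, in bytes
    rs.printReplicationInfo()   // configured size (MB) and replication window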

Thanks
Aasawari