Dec 2024

I created a new sharded cluster (v7.0.12) with one shard and a config server replica set.

I am trying to implement a filesystem snapshot backup, but I am stuck on the lock/unlock commands in the mongos context.

fsyncLock() works fine, as expected:

[direct: mongos] admin> db.getSiblingDB("admin").fsyncLock()
{
  numFiles: 1,
  all: {
    raw: {
      'rs1/backup-poc-rs1-0-fsn1:27017,backup-poc-rs1-1-nbg1:27017,backup-poc-rs1-2-hel1:27017': { info: 'now locked against writes, use db.fsyncUnlock() to unlock', lockCount: Long('1'), seeAlso: 'http://dochub.mongodb.org/core/fsynccommand', ok: 1 },
      'cfg/backup-poc-cfg-0-fsn1:27017,backup-poc-cfg-1-nbg1:27017,backup-poc-cfg-2-hel1:27017': { info: 'now locked against writes, use db.fsyncUnlock() to unlock', lockCount: Long('1'), seeAlso: 'http://dochub.mongodb.org/core/fsynccommand', ok: 1 }
    }
  },
  ok: 1,
  '$clusterTime': { clusterTime: Timestamp({ t: 1723557836, i: 1 }), signature: { hash: Binary.createFromBase64('DIr2nxYdYZypvYVSU6P4ULm+4dA=', 0), keyId: Long('7402590685452304406') } },
  operationTime: Timestamp({ t: 1723557836, i: 1 })
}
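For context, the backup flow I am aiming for is roughly the following. This is only a sketch run in mongosh against my mongos; the snapshot step itself is a placeholder for whatever filesystem/volume tooling the environment uses:

```javascript
// Sketch of the intended snapshot backup flow (run in mongosh on mongos).
// 1. Stop writes cluster-wide: mongos forwards fsyncLock to every shard
//    and the config server replica set.
db.getSiblingDB("admin").fsyncLock()

// 2. While locked, take the filesystem snapshot on each data-bearing host
//    (placeholder -- done outside mongosh with the environment's snapshot tool).

// 3. Re-enable writes -- this is the step that fails with Unauthorized for me.
db.getSiblingDB("admin").fsyncUnlock()
```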

But db.getSiblingDB("admin").fsyncUnlock() is not permitted:

[direct: mongos] admin> db.fsyncUnlock()
MongoServerError[Unauthorized]: not authorized on admin to execute command { fsyncUnlock: 1, lsid: { id: UUID("062761fa-34ae-4723-a93b-1bb9a97cbcca") }, $clusterTime: { clusterTime: Timestamp(1723557836, 1), signature: { hash: BinData(0, 0C8AF69F161D619CA9BD855253A3F850B9BEE1D0), keyId: 7402590685452304406 } }, $db: "admin" }

The user that runs both commands has the following roles:
- "clusterAdmin"
- "dbAdminAnyDatabase"
- "root"
- "hostManager"
which should be enough according to the manual: https://www.mongodb.com/docs/manual/reference/built-in-roles/#mongodb-authrole-hostManager
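One way to see the privileges the server has actually resolved for the session (rather than the roles listed above) is the connectionStatus command with showPrivileges, run in mongosh on the same connection that fails. This is a sketch; the expected entry in the output is based on the documented actions of hostManager:

```javascript
// Run in mongosh on the same mongos connection that fails fsyncUnlock.
// Reports the authenticated users, their roles, and the resolved
// privilege actions for this session.
db.getSiblingDB("admin").runCommand({
  connectionStatus: 1,
  showPrivileges: true
})
// Inspect authInfo.authenticatedUserPrivileges for an entry such as:
//   { resource: { cluster: true }, actions: [ ..., 'fsync', 'unlock', ... ] }
// hostManager should contribute the 'unlock' action on the cluster resource,
// which is what fsyncUnlock requires.
```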

How can I solve this puzzle?

Best regards
Roman Sereda

Yes. I checked this first, and the set of actually granted permissions second.

The db.fsyncLock() command is allowed in the same user session.

Hi @Roman_Sereda1,
I just ran a test and it works correctly for me:

[direct: mongos] test> db.getSiblingDB("admin").fsyncLock()
{
  numFiles: 1,
  all: {
    raw: {
      'myShard_0/replicadue.mongodb.int:27000,replicatre.mongodb.int:27000,replicauno.mongodb.int:27000': { info: 'now locked against writes, use db.fsyncUnlock() to unlock', lockCount: Long('1'), seeAlso: 'http://dochub.mongodb.org/core/fsynccommand', ok: 1 },
      'myShard_1/replicadue.mongodb.int:27001,replicatre.mongodb.int:27001,replicauno.mongodb.int:27001': { info: 'now locked against writes, use db.fsyncUnlock() to unlock', lockCount: Long('1'), seeAlso: 'http://dochub.mongodb.org/core/fsynccommand', ok: 1 },
      'configRS/replicadue.mongodb.int:27020,replicatre.mongodb.int:27020,replicauno.mongodb.int:27020': { info: 'now locked against writes, use db.fsyncUnlock() to unlock', lockCount: Long('1'), seeAlso: 'http://dochub.mongodb.org/core/fsynccommand', ok: 1 }
    }
  },
  ok: 1,
  '$clusterTime': { clusterTime: Timestamp({ t: 1722613341, i: 7 }), signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') } },
  operationTime: Timestamp({ t: 1722613341, i: 7 })
}
[direct: mongos] test> db.getSiblingDB("admin").fsyncUnlock()
{
  raw: {
    'configRS/replicadue.mongodb.int:27020,replicatre.mongodb.int:27020,replicauno.mongodb.int:27020': { info: 'fsyncUnlock completed', lockCount: Long('0'), ok: 1 },
    'myShard_1/replicadue.mongodb.int:27001,replicatre.mongodb.int:27001,replicauno.mongodb.int:27001': { info: 'fsyncUnlock completed', lockCount: Long('0'), ok: 1 },
    'myShard_0/replicadue.mongodb.int:27000,replicatre.mongodb.int:27000,replicauno.mongodb.int:27000': { info: 'fsyncUnlock completed', lockCount: Long('0'), ok: 1 }
  },
  ok: 1,
  '$clusterTime': { clusterTime: Timestamp({ t: 1722613341, i: 16 }), signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') } },
  operationTime: Timestamp({ t: 1722613341, i: 16 })
}
4 months later

I did not find the root of the problem. I avoid this operation and instead use network restrictions to manage incoming sessions, preventing write operations during maintenance windows. The problem remains reproducible in my environment.
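If mongos keeps refusing fsyncUnlock, a possible fallback (a sketch, assuming direct access to each replica set member) is to issue fsyncUnlock directly against every mongod that fsyncLock locked, since the lock is held per mongod and the lock count is nested:

```javascript
// Run in mongosh connected DIRECTLY to each locked member
// (every node of rs1 and cfg in this cluster), not through mongos.
// fsyncLock calls nest, so repeat until lockCount reaches 0.
let res;
do {
  res = db.getSiblingDB("admin").runCommand({ fsyncUnlock: 1 });
  print(`lockCount now: ${res.lockCount}`);
} while (res.lockCount > 0);
```

This bypasses the mongos authorization path entirely, so it can also help narrow down whether the Unauthorized error is specific to the mongos routing of fsyncUnlock.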