AuthenticationFailed: SCRAM authentication failed, storedKey mismatch

I have installed the MongoDB Enterprise Operator in OpenShift 4.9 and deployed Ops Manager and MongoDB with TLS and SCRAM authentication enabled. Everything was fine until I realized that the password for the "mms-automation-agent" user was being rotated constantly, which generated the following error on the MongoDB instances:

"attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"mms-automation-agent","authenticationDatabase":"admin","remote":"10.128.2.122:33044","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}

In Ops Manager, the processes were shown with a red square indicating "The primary of this replica set is unavailable".

Is there a way to disable automatic password rotation for “mms-automation-agent”?

Or maybe it is a bug?

MongoDB version is 5.0.1-ent
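For anyone debugging the same thing, one way to see which credentials the operator currently stores is to inspect the secrets it creates for the deployment. A minimal sketch, assuming the resources live in a namespace called mongodb and the resource is named my-replica-set (both names are placeholders; adjust to your own setup):

# list the secrets the operator manages for this deployment (names will differ per setup)
kubectl -n mongodb get secrets | grep my-replica-set
# print the data of the agent credentials secret found above; values are base64-encoded
kubectl -n mongodb get secret <secret-name> -o jsonpath='{.data}'

Comparing what is in the secret with what the automation agent is actually sending can at least show whether the rotation itself or a stale copy of the credentials is the problem.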

I have a similar error using a MongoDB Docker container with no authentication enabled. Since yesterday I have been getting:

"attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"xxx_user","authenticationDatabase":"admin","remote":"172.20.0.7:43540","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}
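If authentication is supposed to be disabled, it may be worth confirming what the running mongod was actually started with, since this error only appears when a client attempts SCRAM authentication. A quick check, assuming the container is named mongo (a placeholder; use your own container name, and the legacy mongo shell instead of mongosh on older images):

# show the options the running mongod was started with
docker exec -it mongo mongosh --quiet --eval 'db.adminCommand({ getCmdLineOpts: 1 })'
# if the output contains --auth or security.authorization: enabled, authentication is on,
# and the failing client ("xxx_user") is sending credentials that no longer match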

I am also getting a similar error; has any solution been found? "AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"

I am on the first host adding a second node with rs.add, and this is the log file on the second node:

{"t":{"$date":"2023-05-18T22:01:54.798-03:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn81","msg":"client metadata","attr":{"remote":"10.100.180.11:37288","client":"conn81","doc":{"driver":{"name":"NetworkInterfaceTL","version":"4.4.20"},"os":{"type":"Linux","name":"PRETTY_NAME=\"Debian GNU/Linux 11 (bullseye)\"","architecture":"x86_64","version":"Kernel 5.10.0-21-amd64"}}}}
{"t":{"$date":"2023-05-18T22:01:54.799-03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn81","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"__system","authenticationDatabase":"local","remote":"10.100.180.11:37288","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}
{"t":{"$date":"2023-05-18T22:01:54.801-03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn81","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.100.180.11:37288","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}
{"t":{"$date":"2023-05-18T22:01:54.801-03:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn81","msg":"Connection ended","attr":{"remote":"10.100.180.11:37288","connectionId":81,"connectionCount":0}}
{"t":{"$date":"2023-05-18T22:01:55.797-03:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.100.180.11:37294","connectionId":82,"connectionCount":1}}
{"t":{"$date":"2023-05-18T22:01:55.798-03:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn82","msg":"client metadata","attr":{"remote":"10.100.180.11:37294","client":"conn82","doc":{"driver":{"name":"NetworkInterfaceTL","version":"4.4.20"},"os":{"type":"Linux","name":"PRETTY_NAME=\"Debian GNU/Linux 11 (bullseye)\"","architecture":"x86_64","version":"Kernel 5.10.0-21-amd64"}}}}
{"t":{"$date":"2023-05-18T22:01:55.799-03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn82","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"__system","authenticationDatabase":"local","remote":"10.100.180.11:37294","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}

I am using this version because my server's processor does not have the CPU flags required by newer versions.
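Since the failing principal is __system against the local database, this is the internal member-to-member authentication, which is derived from the keyFile contents, and a storedKey mismatch there usually means the two hosts are not using byte-identical keyfiles. One way to check, assuming the path from your security.keyFile setting is /etc/mongod.keyfile (adjust to your own path):

# run on both hosts; the checksums must match exactly
md5sum /etc/mongod.keyfile
# permissions also matter: the file should be readable only by the mongod user (mode 400 or 600)
ls -l /etc/mongod.keyfile

If the checksums differ, copy the exact same keyfile to the second node and restart mongod there before retrying rs.add.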

I have installed MongoDB 5.0 using a MongoDB StatefulSet, and the FCV is set to 5.0. When I try to upgrade it to 6.0, the pod does not come up and is stuck in the bootstrap init container:
kubectl get pods|grep faal
faal-mongodb-0 1/1 Running 0 20h
faal-mongodb-1 1/1 Running 0 20h
faal-mongodb-2 0/1 Init:2/3 0 87m

No error is seen in the log:
kubectl logs -f faal-mongodb-2 -c bootstrap
2023/08/25 05:31:21 Peer list updated
was
now [faal-mongodb-0.faal-mongodb.default.svc.cluster.local faal-mongodb-1.faal-mongodb.default.svc.cluster.local faal-mongodb-2.faal-mongodb.default.svc.cluster.local]
2023/08/25 05:31:21 execing: /work-dir/on-start.sh with stdin: faal-mongodb-0.faal-mongodb.default.svc.cluster.local
faal-mongodb-1.faal-mongodb.default.svc.cluster.local
faal-mongodb-2.faal-mongodb.default.svc.cluster.local

When I exec into the pod, I can see the same authentication error in logs.txt:
{"t":{"$date":"2023-08-31T08:29:19.359+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1118555","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":false,"principalName":"__system","authenticationDatabase":"local","remote":"10.244.24.36:40500","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}

Does anybody have any clue?
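One thing worth checking is whether the keyfile mounted into the stuck pod matches the one the healthy members use, since __system failures against the local database usually point at mismatched keyfiles. A rough sketch, assuming the keyfile is mounted at /data/configdb/key.txt and the keyfile secret is named faal-mongodb-keyfile (both are guesses based on common charts; adjust to your chart's values):

# compare the keyfile on the healthy members
for p in faal-mongodb-0 faal-mongodb-1; do kubectl exec "$p" -- md5sum /data/configdb/key.txt; done
# the stuck pod's main container is not running yet, so inspect the mounted secret directly instead
kubectl get secret faal-mongodb-keyfile -o jsonpath='{.data}'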

Getting this same error in one of the config VMs.

“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”

Does anybody have any solution?

Hi All,

AuthenticationFailed: SCRAM authentication failed, storedKey mismatch

Have we found any solution for this error we are facing? If yes, please share.

Thanks

Hey folks!

Okay, so if you’ve recently switched from a standalone k8s deployment to a ReplicaSet and are getting this error, this one’s for ya!
For each deployment, you get a new root password in the corresponding secret. But even though the value of this secret is updated every time you redeploy your ReplicaSet, the change takes no effect, because ReplicaSets are backed by k8s StatefulSets: the PVC is not deleted by default when you delete your deployment with e.g. Helm, even if its reclaim policy says “Delete”. That means your “admin” DB, and the storedKey (derived from the root password) stored within it, stays the same as it was at the very first deployment. If you want the changes to take effect, you’ll need to clean up the PVC resource manually, and maybe the PV and the physical storage as well; see the sketch below.
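As a concrete example of that cleanup (the release and PVC names below are placeholders; run kubectl get pvc to find your actual ones):

# remove the release, then the PVCs the StatefulSet left behind,
# so the next install starts with a fresh admin DB and picks up the new root password
helm uninstall my-mongodb
kubectl get pvc
kubectl delete pvc datadir-my-mongodb-0 datadir-my-mongodb-1 datadir-my-mongodb-2
# if the released PVs use the "Retain" reclaim policy, delete them (and the backing storage) too
kubectl get pv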

I think it’s a bug after all. If the root password comes from the secret, it should be applied on each deployment, independently of any previous storage state.