What happens if a 2-node replica set loses its primary?

Hi team, I have read the MongoDB documentation about elections.
I see that a replica set needs at least 3 nodes to make sure a majority can be reached. So, as I understand it, if a 2-node replica set loses its primary, the replica set is dead, because you need both of its nodes up and running to reach a majority and complete an election.
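
For reference, here is a minimal mongosh sketch of that vote math (the field names come from rs.conf(); run it against your own set to confirm):

// Compute how many votes an election needs in this replica set
var votingMembers = rs.conf().members.filter(m => m.votes > 0).length;
var majority = Math.floor(votingMembers / 2) + 1;
print("voting members: " + votingMembers + ", votes needed to win an election: " + majority);
// With 2 voting members this prints 2, so a lone surviving node should not
// normally be able to elect itself.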

I have been using a 2-node replica set because I didn’t know about the 3-node minimum. And I found that when the primary of a 2-node replica set dies, the secondary can still complete an election and promote itself to primary.

This is really confusing because I don’t think this is the expected behaviour.

The setup of my 2-node replica set is shown below (I connected to mongo-shard-0-0 and ran rs.status()):

{
"set" : "mongo-shard-rs-0",
"date" : ISODate("2024-10-16T05:41:27.347Z"),
"myState" : 2,
"term" : NumberLong(46),
"syncSourceHost" : "mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017",
"syncSourceId" : 1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"votingMembersCount" : 2,
"writableVotingMembersCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"lastCommittedWallTime" : ISODate("2024-10-16T05:41:25.147Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"appliedOpTime" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"durableOpTime" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"lastAppliedWallTime" : ISODate("2024-10-16T05:41:25.147Z"),
"lastDurableWallTime" : ISODate("2024-10-16T05:41:25.147Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1729057275, 1),
"electionParticipantMetrics" : {
"votedForCandidate" : true,
"electionTerm" : NumberLong(46),
"lastVoteDate" : ISODate("2024-10-16T05:40:25.036Z"),
"electionCandidateMemberId" : 1,
"voteReason" : "",
"lastAppliedOpTimeAtElection" : {
"ts" : Timestamp(1729057182, 1),
"t" : NumberLong(44)
},
"maxAppliedOpTimeInSet" : {
"ts" : Timestamp(1729057182, 1),
"t" : NumberLong(44)
},
"priorityAtElection" : 1,
"newTermStartDate" : ISODate("2024-10-16T05:40:25.060Z"),
"newTermAppliedDate" : ISODate("2024-10-16T05:40:27.177Z")
},
"members" : [
{
"_id" : 0,
"name" : "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 77,
"optime" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"optimeDate" : ISODate("2024-10-16T05:41:25Z"),
"lastAppliedWallTime" : ISODate("2024-10-16T05:41:25.147Z"),
"lastDurableWallTime" : ISODate("2024-10-16T05:41:25.147Z"),
"syncSourceHost" : "mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1153530,
"configTerm" : 46,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 69,
"optime" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"optimeDurable" : {
"ts" : Timestamp(1729057285, 1),
"t" : NumberLong(46)
},
"optimeDate" : ISODate("2024-10-16T05:41:25Z"),
"optimeDurableDate" : ISODate("2024-10-16T05:41:25Z"),
"lastAppliedWallTime" : ISODate("2024-10-16T05:41:25.147Z"),
"lastDurableWallTime" : ISODate("2024-10-16T05:41:25.147Z"),
"lastHeartbeat" : ISODate("2024-10-16T05:41:25.570Z"),
"lastHeartbeatRecv" : ISODate("2024-10-16T05:41:27.060Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1729057225, 1),
"electionDate" : ISODate("2024-10-16T05:40:25Z"),
"configVersion" : 1153530,
"configTerm" : 46
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("000000000000000000000000")
},
"lastCommittedOpTime" : Timestamp(1729057285, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1729057284, 1),
"t" : NumberLong(-1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1729057285, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1729057285, 1)
}

When I shut down mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 and then run rs.status() on mongo-shard-0-0, I get this:

{
"set" : "mongo-shard-rs-0",
"date" : ISODate("2024-10-16T05:48:18.841Z"),
"myState" : 1,
"term" : NumberLong(48),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
"votingMembersCount" : 1,
"writableVotingMembersCount" : 1,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1729057690, 1),
"t" : NumberLong(48)
},
"lastCommittedWallTime" : ISODate("2024-10-16T05:48:10.632Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1729057690, 1),
"t" : NumberLong(48)
},
"appliedOpTime" : {
"ts" : Timestamp(1729057690, 1),
"t" : NumberLong(48)
},
"durableOpTime" : {
"ts" : Timestamp(1729057690, 1),
"t" : NumberLong(48)
},
"lastAppliedWallTime" : ISODate("2024-10-16T05:48:10.632Z"),
"lastDurableWallTime" : ISODate("2024-10-16T05:48:10.632Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1729057630, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2024-10-16T05:45:30.617Z"),
"electionTerm" : NumberLong(48),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(1729057495, 1),
"t" : NumberLong(46)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1729057495, 1),
"t" : NumberLong(46)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2024-10-16T05:45:30.628Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2024-10-16T05:45:30.639Z")
},
"electionParticipantMetrics" : {
"votedForCandidate" : true,
"electionTerm" : NumberLong(46),
"lastVoteDate" : ISODate("2024-10-16T05:40:25.036Z"),
"electionCandidateMemberId" : 1,
"voteReason" : "",
"lastAppliedOpTimeAtElection" : {
"ts" : Timestamp(1729057182, 1),
"t" : NumberLong(44)
},
"maxAppliedOpTimeInSet" : {
"ts" : Timestamp(1729057182, 1),
"t" : NumberLong(44)
},
"priorityAtElection" : 1
},
"members" : [
{
"_id" : 0,
"name" : "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 488,
"optime" : {
"ts" : Timestamp(1729057690, 1),
"t" : NumberLong(48)
},
"optimeDate" : ISODate("2024-10-16T05:48:10Z"),
"lastAppliedWallTime" : ISODate("2024-10-16T05:48:10.632Z"),
"lastDurableWallTime" : ISODate("2024-10-16T05:48:10.632Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1729057530, 1),
"electionDate" : ISODate("2024-10-16T05:45:30Z"),
"configVersion" : 1182308,
"configTerm" : -1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000030")
},
"lastCommittedOpTime" : Timestamp(1729057690, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1729057697, 2),
"t" : NumberLong(-1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1729057697, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1729057690, 1)
}

The log from mongo-shard-0-0 is

{“t”:{“$date”:“2024-10-16T05:45:29.059+00:00”},“s”:“I”, “c”:“ELECTION”, “id”:4615655, “ctx”:“ReplCoord-0”,“msg”:“Not starting an election, since we are not electable”,“attr”:{“reason”:“Not standing for election because I cannot see a majority (mask 0x1)”}}
{“t”:{“$date”:“2024-10-16T05:45:29.066+00:00”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-0”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”}}}
{“t”:{“$date”:“2024-10-16T05:45:29.454+00:00”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“error”:“HostUnreachable: Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”,“replicaSet”:“mongo-shard-rs-0”,“response”:“{}”}}
{“t”:{“$date”:“2024-10-16T05:45:29.454+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“mongo-shard-rs-0”,“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}
{“t”:{“$date”:“2024-10-16T05:45:29.578+00:00”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-0”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”}}}
{“t”:{“$date”:“2024-10-16T05:45:29.954+00:00”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”}}
{“t”:{“$date”:“2024-10-16T05:45:29.958+00:00”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“error”:“HostUnreachable: Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”,“replicaSet”:“mongo-shard-rs-0”,“response”:“{}”}}
{“t”:{“$date”:“2024-10-16T05:45:29.958+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“mongo-shard-rs-0”,“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”}}}}
{“t”:{“$date”:“2024-10-16T05:45:30.090+00:00”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-0”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”}}}
{“t”:{“$date”:“2024-10-16T05:45:30.462+00:00”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“error”:“HostUnreachable: Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”,“replicaSet”:“mongo-shard-rs-0”,“response”:“{}”}}
{“t”:{“$date”:“2024-10-16T05:45:30.462+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“mongo-shard-rs-0”,“host”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}
{“t”:{“$date”:“2024-10-16T05:45:30.603+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn105”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:50546”,“client”:“conn105”,“doc”:{“driver”:{“name”:“nodejs”,“version”:“2.2.36”},“os”:{“type”:“Linux”,“name”:“linux”,“architecture”:“x64”,“version”:“5.10.223-212.873.amzn2.x86_64”},“platform”:“Node.js v11.2.0, LE, mongodb-core: 2.1.20”}}}
{“t”:{“$date”:“2024-10-16T05:45:30.603+00:00”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-0”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)”}}}
{“t”:{“$date”:“2024-10-16T05:45:30.606+00:00”},“s”:“I”, “c”:“REPL”, “id”:21352, “ctx”:“conn105”,“msg”:“replSetReconfig admin command received from client”,“attr”:{“newConfig”:{“_id”:“mongo-shard-rs-0”,“version”:1153531,“term”:46,“members”:[{“_id”:0,“host”:“mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017”,“arbiterOnly”:false,“buildIndexes”:true,“hidden”:false,“priority”:1,“tags”:{},“secondaryDelaySecs”:0,“votes”:1}],“protocolVersion”:1,“writeConcernMajorityJournalDefault”:true,“settings”:{“chainingAllowed”:true,“heartbeatIntervalMillis”:2000,“heartbeatTimeoutSecs”:10,“electionTimeoutMillis”:10000,“catchUpTimeoutMillis”:-1,“catchUpTakeoverDelayMillis”:30000,“getLastErrorModes”:{},“getLastErrorDefaults”:{“w”:1,“wtimeout”:0},“replicaSetId”:{“$oid”:“67087acec01a5a6118460985”}}}}}

{“t”:{“$date”:“2024-10-16T05:45:30.617+00:00”},“s”:“I”, “c”:“ELECTION”, “id”:4615652, “ctx”:“conn105”,“msg”:“Starting an election, since we’ve seen no PRIMARY in election timeout period”,“attr”:{“electionTimeoutPeriodMillis”:10000}}
{“t”:{“$date”:“2024-10-16T05:45:30.617+00:00”},“s”:“I”, “c”:“ELECTION”, “id”:21444, “ctx”:“ReplCoord-2”,“msg”:“Dry election run succeeded, running for election”,“attr”:{“newTerm”:48}}
{“t”:{“$date”:“2024-10-16T05:45:30.617+00:00”},“s”:“I”, “c”:“ELECTION”, “id”:6015300, “ctx”:“ReplCoord-0”,“msg”:“Storing last vote document in local storage for my election”,“attr”:{“lastVote”:{“term”:48,“candidateIndex”:0}}}
{“t”:{“$date”:“2024-10-16T05:45:30.618+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:4333213, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM Topology Change”,“attr”:{“replicaSet”:“mongo-shard-rs-0”,“newTopologyDescription”:“{ id: "29f69fc4-0fd2-4370-a26d-19cea3d60bfc", topologyType: "ReplicaSetNoPrimary", servers: { mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017: { address: "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017", topologyVersion: { processId: ObjectId(‘670f51ba273efe9898d3ec9e’), counter: 5 }, roundTripTime: 216, lastWriteDate: new Date(1729057495000), opTime: { ts: Timestamp(1729057495, 1), t: 46 }, type: "RSSecondary", minWireVersion: 13, maxWireVersion: 13, me: "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017", setName: "mongo-shard-rs-0", setVersion: 1182308, lastUpdateTime: new Date(1729057530618), logicalSessionTimeoutMinutes: 30, hosts: { 0: "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017" }, arbiters: {}, passives: {} }, mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: { address: "mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017", type: "Unknown", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: "mongo-shard-rs-0", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId(‘7fffffff000000000000002e’), setVersion: 1153530 } }”,“previousTopologyDescription”:“{ id: "8c458316-1de8-4271-9c9a-0b433d6b230d", topologyType: "ReplicaSetNoPrimary", servers: { mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017: { address: "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017", topologyVersion: { processId: ObjectId(‘670f51ba273efe9898d3ec9e’), counter: 4 }, roundTripTime: 216, lastWriteDate: new Date(1729057495000), opTime: { ts: Timestamp(1729057495, 1), t: 46 }, type: "RSSecondary", minWireVersion: 13, maxWireVersion: 13, me: "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017", setName: "mongo-shard-rs-0", setVersion: 1153530, lastUpdateTime: new Date(1729057525318), logicalSessionTimeoutMinutes: 30, hosts: { 0: "mongo-shard-0-0.mongo-shard-0-svc.default.svc.cluster.local:27017", 1: "mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017" }, arbiters: {}, passives: {} }, mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017: { address: "mongo-shard-0-1.mongo-shard-0-svc.default.svc.cluster.local:27017", type: "Unknown", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: "mongo-shard-rs-0", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId(‘7fffffff000000000000002e’), setVersion: 1153530 } }”}}
{“t”:{“$date”:“2024-10-16T05:45:30.627+00:00”},“s”:“I”, “c”:“ELECTION”, “id”:21450, “ctx”:“ReplCoord-0”,“msg”:“Election succeeded, assuming primary role”,“attr”:{“term”:48}}


Hi @jia_shizhen ,

Generally, in the case of two node failures in a three-node replica set, the remaining node will assume the SECONDARY role and will serve only reads; writes will fail.
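
Roughly what that looks like from mongosh when connected to the lone surviving secondary (a sketch; the collection name is just a placeholder and the exact error text varies by server version):

// Connected to the surviving SECONDARY while no primary exists
db.getMongo().setReadPref("secondaryPreferred"); // allow reads from a secondary
db.orders.find().limit(1);                       // reads still work
try {
  db.orders.insertOne({ test: 1 });              // writes need a primary
} catch (e) {
  print(e.message);                              // e.g. NotWritablePrimary / "not primary"
}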

But the above scenario with 2 nodes is quite interesting. Based on the output of your rs.status() command, the node is effectively running on its own, and the secondary node's details are absent from the members array.

The reason for this is the removal of the unavailable node from the replica set configuration, which leaves the remaining member running by itself. If the node had not been removed from the replica set, we would still see the other member listed, but in an unreachable state.
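
Your mongod log shows exactly that: at 05:45:30.606 the node received a replSetReconfig whose member list contains only mongo-shard-0-0, and only after that did it start an election with numVotesNeeded: 1. A rough manual equivalent of such a reconfig, purely to illustrate what the automation did (an assumed sketch; do not run this casually against a production set):

// mongosh, on the surviving node: drop the unreachable member from the config
var cfg = rs.conf();
cfg.members = cfg.members.filter(m => !m.host.startsWith("mongo-shard-0-1"));
rs.reconfig(cfg, { force: true }); // force is needed when no primary is reachable

// The vote math then changes, which is why the election can succeed:
rs.status().votingMembersCount;    // 1
rs.status().majorityVoteCount;     // 1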

This is a link to the original post: What will happen if a 2 node replica set lose its primary? - #7 by steevej.

Yes. Exactly. That is why in the original post, I wrote

and asked


How it is set up is key. It looks like scaling the StatefulSet is also updating the replica set configuration.

Cordoning the node and deleting the pod is a better way of testing the failure scenario.


So @jia_shizhen, did you:

Remove the node from the replica set before terminating the pod, as hinted by @Saipraneeth_Vaddineni and me?

Or configure the cluster so that the node is automatically removed from the replica set, as hinted by @chris?

Please do not leave your 2 threads about this issue dangling. If your issue has been solved by following one of the replies, please mark that reply as the solution. If you found the solution elsewhere or on your own, please share it.

Thank you, I have figured it out:
When I scale down the StatefulSet, a SIGTERM signal triggers a graceful shutdown of that node and the departing member is removed from the replica set configuration (the replSetReconfig seen in the log), so the remaining node is left running on its own and can elect itself primary.
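
If you want to confirm that the next time you scale down, the remaining node's view of the set should already be shrunk to a single member (a quick mongosh check; adjust the host names to your deployment):

// mongosh, on mongo-shard-0-0 right after the scale-down
rs.conf().members.map(m => m.host); // only mongo-shard-0-0...:27017 should be listed
rs.status().votingMembersCount;     // 1, so the node can elect itself as PRIMARY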

