Jul 2024

I have created the db folders for a three-member replica set, i.e. db, db1 and db2, and the replSet name is rs0.

I have executed the steps below in three different sessions:
mongod --port 27017 --replSet rs0 --dbpath "C:\data\db" --bind_ip localhost
mongod --port 27018 --replSet rs0 --dbpath "C:\data\db1" --bind_ip localhost
mongod --port 27019 --replSet rs0 --dbpath "C:\data\db2" --bind_ip localhost

and logged in with mongosh --port 27017 localhost in a 4th session.

Up to this point I am able to work with a .js file without any issue. But once I close all these sessions and log in again

with only mongosh --port 27017 localhost in one session, my replica set member comes up as a secondary. Please refer below:

rs0 [direct: secondary] localhost>

I have added the rs.status() output below. How do I make the secondary become primary? Also, due to this issue I am not able to connect with MongoDB Compass.

rs0 [direct: secondary] localhost> rs.status()
{
  set: 'rs0',
  date: ISODate('2024-06-13T09:44:02.494Z'),
  myState: 2,
  term: Long('6'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1718268794, i: 1 }), t: Long('6') },
    lastCommittedWallTime: ISODate('2024-06-13T08:53:14.587Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1718268794, i: 1 }), t: Long('6') },
    appliedOpTime: { ts: Timestamp({ t: 1718268814, i: 1 }), t: Long('6') },
    durableOpTime: { ts: Timestamp({ t: 1718268814, i: 1 }), t: Long('6') },
    lastAppliedWallTime: ISODate('2024-06-13T08:53:34.589Z'),
    lastDurableWallTime: ISODate('2024-06-13T08:53:34.589Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1718268794, i: 1 }),
  electionParticipantMetrics: {
    votedForCandidate: false,
    electionTerm: Long('4'),
    lastVoteDate: ISODate('2024-06-13T08:05:19.407Z'),
    electionCandidateMemberId: 2,
    voteReason: 'already voted for another candidate (localhost:27018) this term (4)',
    lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1718121170, i: 1 }), t: Long('3') },
    maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1718121170, i: 1 }), t: Long('3') },
    priorityAtElection: 1
  },
  members: [
    {
      _id: 0,
      name: 'localhost:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 515597,
      optime: { ts: Timestamp({ t: 1718268814, i: 1 }), t: Long('6') },
      optimeDate: ISODate('2024-06-13T08:53:34.000Z'),
      lastAppliedWallTime: ISODate('2024-06-13T08:53:34.589Z'),
      lastDurableWallTime: ISODate('2024-06-13T08:53:34.589Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 4,
      configTerm: 6,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'localhost:27018',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
      lastAppliedWallTime: ISODate('2024-06-13T08:53:04.587Z'),
      lastDurableWallTime: ISODate('2024-06-13T08:53:04.587Z'),
      lastHeartbeat: ISODate('2024-06-13T09:43:59.153Z'),
      lastHeartbeatRecv: ISODate('2024-06-13T08:53:13.646Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: 'Error connecting to localhost:27018 (127.0.0.1:27018) :: caused by :: onInvoke :: caused by :: No connection could be made because the target machine actively refused it.',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 4,
      configTerm: 6
    },
    {
      _id: 2,
      name: 'localhost:27019',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
      lastAppliedWallTime: ISODate('2024-06-13T08:53:24.588Z'),
      lastDurableWallTime: ISODate('2024-06-13T08:53:14.587Z'),
      lastHeartbeat: ISODate('2024-06-13T09:43:59.574Z'),
      lastHeartbeatRecv: ISODate('2024-06-13T08:53:33.368Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: 'Error connecting to localhost:27019 (127.0.0.1:27019) :: caused by :: onInvoke :: caused by :: No connection could be made because the target machine actively refused it.',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 4,
      configTerm: 6
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1718268814, i: 1 }),
    signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') }
  },
  operationTime: Timestamp({ t: 1718268814, i: 1 })
}
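On the Compass point: connecting directly to a single port lands on whatever state that one member is in. Once all three mongods are running and a primary has been elected, Compass (and mongosh) can instead use a replica-set connection string, which discovers the current primary automatically. A hedged example, assuming the ports from this setup:

```
mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0
```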


Could you please help me resolve this issue? Also, is there any way to start the replica set in a single session?

TIA.

Regards,

Pawan

Your rs.status() shows the other 2 nodes are down (not reachable/healthy, unable-to-connect message).
You need a majority of nodes up for a primary to be elected.
What is your OS?
Why were the sessions closed? When you close a session where mongod is running, the mongod gets terminated unless you are running it in background mode using fork.
If your OS is Windows you have to leave those sessions open.
Check your mongods again.
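The majority rule above can be sketched as a one-liner: a member needs votes from a strict majority of voting members to win an election, so with 3 voting members at least 2 must be reachable. A minimal sketch of the arithmetic, not MongoDB's actual implementation:

```javascript
// Votes needed to win an election: a strict majority of voting members.
function majorityNeeded(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

// With only 1 of 3 members up, the quorum of 2 can't be reached,
// so the surviving member stays SECONDARY.
console.log(majorityNeeded(3)); // 2
```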

Thank you for the details, Ramachandra.
Is there any way to keep these sessions permanently running on a Windows machine?
After my laptop restarts, do I have to start the 3 sessions again before I can start working?

There is no fork option on Windows.
Yes, after every reboot you need to start your mongods again.
To avoid this you can install each mongod as a Windows service, which comes up automatically after a reboot.
You need to configure 3 services if you are using the same machine for your replica set.
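A sketch of the service approach (file paths and service names here are illustrative, not from this thread). Each member gets its own config file — e.g. C:\data\rs0-27017.cfg — because a Windows service cannot share a port, dbPath or log file with another member:

```
# rs0-27017.cfg -- one file per member; the other two differ only in
# port (27018/27019), dbPath (db1/db2) and log path
systemLog:
  destination: file
  path: C:\data\log\rs0-27017.log
storage:
  dbPath: C:\data\db
net:
  port: 27017
  bindIp: localhost
replication:
  replSetName: rs0
```

Then, from an elevated (Administrator) command prompt, register and start the service:

```
mongod --config "C:\data\rs0-27017.cfg" --install --serviceName "MongoDB-rs0-27017"
net start MongoDB-rs0-27017
```

Repeat for the other two config files and all three members will start on their own after a reboot, so no sessions need to stay open.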

13 days later