Hey

I want to create a replica set model that look like this:

requirements for this design:

  1. Data needs to be copied automatically between pc’s to local storage.
  2. Each pc should be able to work as stand alone.
  3. PC 1 does all the writes
  4. PC 2/3 has to be Read-Only
  5. Each PC only knows its own IP

I have the following configuration:

rs.initiate({
  _id: "ReplTest",
  members: [
    { _id: 0, host: "1.1.1.1:27017", priority: 10 },
    { _id: 1, host: "1.1.1.1:27018", arbiterOnly: true },
    { _id: 2, host: "2.2.2.2:27017", priority: 1 },
    { _id: 3, host: "2.2.2.2:27018", arbiterOnly: true },
    { _id: 4, host: "3.3.3.3:27017", priority: 1 },
    { _id: 5, host: "3.3.3.3:27018", arbiterOnly: true }
  ]
})

db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: 2 },
  defaultReadConcern: { level: "majority" }
})

The questions:

  1. I have created this setup and tested it using MongoDB Compass. I connected from PC 2 with the connection string "mongodb://2.2.2.2:27017/?replicaSet=ReplTest" (notice it is its own IP). This instance is a secondary, yet it somehow managed to write to the collection. Why? (I thought you can only write through the primary.)
  2. If you have a better system design to suggest I would love to hear.
  3. In a situation where PC 1 (the primary) goes down and an election happens, is it possible to keep the cluster read-only? (meaning only PC 1 can write).
  4. Is it possible to know whether a write operation has replicated to the other nodes? I saw there is a timestamp, but an ID for the operation would be better…
    For the case where PC 1 has done a write while PC 2 wasn't connected.

Thank you!


Hello, welcome to the MongoDB community.

  1. This has happened to me too; I believe it is something with Compass, because when you open mongosh from within Compass, it routes you to the primary node. Can you confirm whether that is what happened to you?

  2. In database environments the ideal is to always have an odd number of voting nodes. In this case you have 3 machines running 6 services, and that is not a good thing. You could evaluate a PSS (primary-secondary-secondary) or PSA (primary-secondary-arbiter) architecture.

  3. You can keep the other nodes read-only, so that only PC1 receives writes, but this means you cannot guarantee full high availability. You can do this by removing votes from the other nodes. (I do not recommend it.)

  4. By default, transactions in MongoDB use a majority write concern to ensure the write was performed on more than one node. If a node goes down, it will receive the missing data when it rejoins the cluster, as long as the required oplog entries are still available. If not, you will need to do a full resync.
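To check from the client side that a specific write has replicated, you can also request an explicit write concern per operation; the insert is only acknowledged once the requested number of data-bearing nodes have applied it. A minimal sketch in mongosh ("mycoll" is a placeholder collection name; w: 3 matches the three data-bearing nodes in your config, and arbiters do not count toward w):

db.mycoll.insertOne(
  { msg: "replication check" },
  { writeConcern: { w: 3, wtimeout: 5000 } }  // wait up to 5s for 3 data-bearing nodes
)

If fewer than 3 nodes acknowledge within the timeout, the operation returns a write concern error instead of silently succeeding.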

If this is for a production environment, I recommend reading the production notes.

Thank you for the reply,

  1. I will definitely test this without using Compass, hoping the write is denied.

  2. I understand there are best-practice rules for typical database environments, but this is not a typical case: this database is for a closed network without many failures or much traffic, and the idea is to keep it simple.

  3. How do I keep the nodes read-only and guarantee that only PC1 receives writes? I know this is problematic in the usual cases, but not here. Also, if PC1 goes down and an election occurs, can I guarantee that no writes will happen? (as PC1 is down…)

  4. Is there perhaps an event to subscribe to, notifying when and on which node a write has happened?

When you connect with replicaSet=ReplTest you tell the driver that you want to connect to the replica set, not to an individual node. The first thing the driver does is retrieve the replica set configuration, and it then connects to the primary. If you want to connect to an individual node, you need to remove the replicaSet parameter from your connection string.
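For illustration, the two connection modes might look like this (directConnection=true is the URI option modern drivers and Compass use to force a direct, non-discovering connection; the host is taken from your example):

// discovers the topology and routes writes to the current primary
mongodb://2.2.2.2:27017/?replicaSet=ReplTest

// connects to this one node only; writes fail unless it happens to be primary
mongodb://2.2.2.2:27017/?directConnection=true

Note that recent Compass versions may add directConnection=true automatically when you connect to a single host without the replicaSet parameter.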

All nodes are read-only except the primary. With the right configuration you may be able to prevent all nodes except PC1 from becoming primary.

I have removed it, connecting with "mongodb://2.2.2.2:27017/", yet the write operation still went through, and I don't understand why (I connected with MongoDB Compass to a secondary without the replicaSet parameter and succeeded in writing).

  1. You can configure PC2 and PC3 with the following settings:

Adjust the member priorities accordingly:

cfg = rs.conf()
cfg.members[1].priority = 0  // PC2: can never be elected primary
cfg.members[2].priority = 0  // PC3: can never be elected primary
rs.reconfig(cfg)

In theory this should be enough: setting priority 0 ensures that the secondaries cannot become primaries. In your architecture you don't need the arbiters at all.
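Since the arbiters are unnecessary here, they could also be removed from the set. A sketch, assuming the host:port values from your original rs.initiate (run on the primary):

rs.remove("1.1.1.1:27018")
rs.remove("2.2.2.2:27018")
rs.remove("3.3.3.3:27018")

rs.remove() takes the member's "hostname:port" string and triggers a reconfiguration, so run the removals one at a time and check rs.status() in between.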

  1. There is no built-in event or subscription to tell you when and on which node a write occurred. The primary node is the only one that can receive writes, and it replicates them to the secondary nodes. What you can do is read a Change Stream to see the data being written and build some logic on top of that, or plug in a CDC tool.
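A minimal Change Stream sketch in mongosh ("mycoll" is a placeholder collection name); each change event carries a resume token in _id and a cluster timestamp, which is close to the per-operation ID asked about earlier:

const cursor = db.mycoll.watch()
while (cursor.hasNext()) {
  const change = cursor.next()
  printjson(change._id)            // resume token identifying this event
  printjson(change.operationType)  // e.g. "insert", "update", "delete"
}

Change Streams require a replica set (which you already have) and can be resumed from a stored token after a disconnect, which covers the "PC 2 wasn't connected" scenario.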

Let me know if you have any further questions


It looks like @Samuel_84194 understood something in your use-case that I did not:

You were able to write to 2.2.2.2:27017 without the replicaSet parameter because it had become the PRIMARY.


Right now the priority is as follows:

cfg.members[0].priority = 10 // PC1
cfg.members[1].priority = 1  // PC2
cfg.members[2].priority = 1  // PC3

As I understand it, PC1 is the primary whenever it's up.

The problem I see with giving PC2 and PC3 priority 0 is that if PC1 goes down, a new election will fail and I won't be able to read from PC2 or PC3.

Where can I read and learn more about this?

Thank you!

1.1.1.1 has a priority of 10 and 2.2.2.2 has a priority of 1, so I don't think it became primary.
Both of them were up, and before the write I saw that 1.1.1.1 was the primary.

Priority 0 ensures that members with this setting are unable to become primary. In that case your cluster would be read-only during a PC1 crash, and from what I understand that is what you want.
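One point on the earlier concern about losing reads during a PC1 outage: secondaries stay readable even when no primary exists, as long as the client asks for it via the read preference. A sketch of the connection string (readPreference is a standard URI option; host taken from your example):

// allows reads from a secondary when no primary is available
mongodb://2.2.2.2:27017/?replicaSet=ReplTest&readPreference=secondaryPreferred

The default read preference is "primary", which is why reads would otherwise fail while the set has no primary.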
