Hey

I want to create a replica set model that looks like this:

PC 1 : ip=1.1.1.1 - The only one that writes
PC 2 : ip=2.2.2.2 - Read only
PC 3 : ip=3.3.3.3 - Read only

Use-case: on PC 1 a program writes to the DB; on PC 2 and PC 3, programs read that data. I want to be able to access the data from each PC even if there is no connection between them.
The solution I thought of is creating a replica set, so that the data is replicated locally to each PC. The challenge is how to configure the replica set :slight_smile:

  1. Only PC 1 does the writing, meaning it is the primary.
  2. If the connection between the PCs drops, each PC should be able to keep working stand-alone. As far as I understand, the solution for that is promoting that PC to primary, so an arbiter is needed (you need a group of at least 2 members to hold an election). Based on this requirement, each PC runs a DB instance and an arbiter.

And this should look something like this:

members: [
  { _id: 0, host: "1.1.1.1:27017", priority: 10 },
  { _id: 1, host: "1.1.1.1:27018", arbiterOnly: true },
  { _id: 2, host: "2.2.2.2:27017", priority: 1 },
  { _id: 3, host: "2.2.2.2:27018", arbiterOnly: true },
  { _id: 4, host: "3.3.3.3:27017", priority: 1 },
  { _id: 5, host: "3.3.3.3:27018", arbiterOnly: true }
]
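For completeness, applying this member list from mongosh might look like the following sketch (the set name `rs0` is my assumption; each mongod must be started with a matching `--replSet` name):

```javascript
// Run once, connected to the mongod on PC 1 (1.1.1.1:27017).
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "1.1.1.1:27017", priority: 10 },
    { _id: 1, host: "1.1.1.1:27018", arbiterOnly: true },
    { _id: 2, host: "2.2.2.2:27017", priority: 1 },
    { _id: 3, host: "2.2.2.2:27018", arbiterOnly: true },
    { _id: 4, host: "3.3.3.3:27017", priority: 1 },
    { _id: 5, host: "3.3.3.3:27018", arbiterOnly: true }
  ]
})
```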

I have tested this scenario: I shut down PC1 and PC2, so PC3 was left alone. On PC3 the DB instance and the arbiter are both running, yet for some reason the instance on PC3 won't become primary.
I would love to know why.

Maybe this is the problem?

{ "setDefaultRWConcern": 1, "defaultWriteConcern": { "w": 2 }, "defaultReadConcern": { "level": "majority" } }
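For what it's worth, write concern governs how writes are acknowledged, not how elections work, so this setting would not by itself stop PC3 from becoming primary. Still, if the `w: 2` default is suspected, it could be lowered again with something along these lines in mongosh (a sketch, to be run against the primary):

```javascript
// w: 1 means a write is acknowledged by the primary alone,
// so writes keep succeeding even when no secondary is reachable.
db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: 1 },
  defaultReadConcern: { level: "local" }
})
```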

How is this different from Replica set system modeling and read-write?

Here I am facing a different problem: the design should have satisfied my "stand-alone" requirement, but for some reason it won't work.

The topic you mentioned is about read/write permissions. I did ask about the system design there, but no suggestions were made, so I thought I would split this question into another topic.

While this is an unusual use case, it is still valid. The question is whether a replica set can be applied here, and I hope it can, since data replication for data availability still holds in this case.

For me, there are two aspects that should be addressed:

  1. Automatic failover

  2. Local Data availability on the isolated secondary

  1. If I recall correctly, MongoDB needs a majority (floor(n/2) + 1 of the voting nodes in the replica set) to elect a primary. If you set up 3 servers with 2 instances on each (data-bearing + arbiter), the set has 6 members, so the majority is 4. Isolating one server leaves 2 of 6 nodes available, which is less than the required majority, so no primary can be elected.

  • Maybe it is possible to configure Replica Set with fewer available nodes to combat this issue.
  • I would try to test the following configuration:
    PC1: primary data bearing instance
    PC2: secondary data bearing + 2 arbiters
    PC3: secondary data bearing
    Then try to disconnect PC2 and see whether it becomes primary.
  2. As the secondary on a server with a failed network becomes isolated, it still holds all the data. If isolation is an exceptional situation (I hope it is), then the following could be done:
  • force the secondary to become primary and reconfigure the replica set: drop the disconnected nodes from the replica set. This step is not automatic, but it could probably be scripted to make it easier
  • try to use a Read Preference Mode
  • try to use the directConnection option in the connection string of your software running on the server holding the secondary
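The first and last bullets might look like this in mongosh, using PC3's address from the question (a sketch of the manual recovery, not a tested runbook):

```javascript
// Connect directly to the local node, bypassing replica-set
// topology discovery:
//   mongosh "mongodb://3.3.3.3:27017/?directConnection=true"

// Then force a reconfiguration that keeps only the reachable member,
// so the surviving node can elect itself primary:
cfg = rs.conf()
cfg.members = cfg.members.filter(m => m.host.startsWith("3.3.3.3"))
rs.reconfig(cfg, { force: true })
```

A forced reconfig like this should be reserved for the exceptional isolation case, since the dropped nodes have to be added back once connectivity returns.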

In this community there are some very knowledgeable people who will suggest you better options. I'm not a seasoned MongoDB DBA, but I do have some experience with other engines.