While this is an unusual use case, it is still a valid one. The question is whether a Replica Set can be applied here, and I hope it can, since data replication for data availability still holds in this case.

For me, there are two aspects that should be addressed:

  1. Automatic failover

  2. Local data availability on the isolated secondary

  1. If I recall correctly, MongoDB needs a majority of voting members to elect a primary, i.e. floor(N/2) + 1. If you set up 3 servers with 2 instances on each (data-bearing + arbiter), N becomes 6 and the majority is floor(6/2) + 1 = 4. So isolating one server leaves it with only 2 of 6 votes, which is less than the required majority, and it cannot elect a primary.

  • Maybe it is possible to configure the Replica Set so that fewer available nodes are needed to elect a primary, which would work around this issue.
  • I would try to test the following configuration (see the sketch after this list):
    PC1: primary data-bearing instance
    PC2: secondary data-bearing instance + 2 arbiters
    PC3: secondary data-bearing instance
    This makes 5 voting members, so an isolated PC2 still holds 3 of 5 votes, which is a majority. Then try to disconnect PC2 and see whether its secondary becomes primary.
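
A minimal mongosh sketch of initiating such a set; the hostnames and ports (pc1.local:27017 etc.) are assumptions for illustration, and keep in mind that MongoDB's own guidance discourages running more than one arbiter:

```
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "pc1.local:27017" },                    // PC1: data-bearing, likely primary
    { _id: 1, host: "pc2.local:27017" },                    // PC2: data-bearing secondary
    { _id: 2, host: "pc2.local:27018", arbiterOnly: true }, // PC2: arbiter 1
    { _id: 3, host: "pc2.local:27019", arbiterOnly: true }, // PC2: arbiter 2
    { _id: 4, host: "pc3.local:27017" }                     // PC3: data-bearing secondary
  ]
})
```

After disconnecting PC2 you can run rs.status() on its secondary to see whether it was elected primary.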
  2. As the secondary on a server with a failed network becomes isolated, it still has all the data. If isolation is an exceptional situation (I hope it is), then the following could be done:
  • force the secondary to become primary by reconfiguring the replica set: drop the disconnected nodes from the config. This step is not automatic, but it could probably be scripted to make it easier (see the sketch after this list)
  • try to use a Read Preference Mode that allows reading from secondaries, e.g. secondaryPreferred
  • try to use the directConnection option in the connection string of the software running on the server holding the secondary (also sketched below)
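
A rough mongosh sketch of the forced reconfiguration, assuming you connect straight to the isolated secondary; the pc2 hostname filter is an assumption and must match your actual member list:

```
// Connect directly to the isolated secondary first, e.g.:
//   mongosh "mongodb://pc2.local:27017/?directConnection=true"
cfg = rs.conf()
// Keep only the members still reachable on this side; which hosts
// survive depends on your actual configuration.
cfg.members = cfg.members.filter(m => m.host.startsWith("pc2."))
// force: true lets the reconfig proceed without a majority of the old set.
rs.reconfig(cfg, { force: true })
```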
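
And hedged examples of the two connection-string options mentioned above (the hostnames and the replica set name rs0 are assumptions):

```
# Read preference: allow reads to fall back to a secondary
mongodb://pc2.local:27017/?replicaSet=rs0&readPreference=secondaryPreferred

# Direct connection: bypass replica-set topology discovery and use that node alone
mongodb://pc2.local:27017/?directConnection=true
```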

There are some very knowledgeable people in this community who will suggest better options; I'm not a seasoned MongoDB DBA, though I do have some experience with other engines.