As the title suggests, I am using docker-compose to run 3 mongo containers. I have attached the rs.status() log and a screenshot of Studio 3T scanning my ports for the replica set members.
Have you set your IP addresses correctly in your bindIp network configuration? Did you include the IP address of your client in your 3 config files?
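For reference, this is the kind of setting I mean (a sketch only; the replica set and host names are assumptions based on your container names):

```sh
# Bind mongod to the interfaces the other members and your client will use
# (the hostnames below are assumptions)
mongod --replSet rs0 --bind_ip localhost,mongo-rs0-1

# Or, for a throwaway dev container, simply bind to all interfaces
mongod --replSet rs0 --bind_ip_all
```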
If that’s not it, could you please share your config file and maybe your docker-compose.yml so we have a bit more information to work with?
Also, just to confirm: “mongo-rs0-1”, “mongo-rs0-2” and “mongo-rs0-3” are 3 different physical servers, correct?
I do not believe I have done much in the network configuration. Feel free to suggest ways I can improve my docker-compose.yml file! It’s a boilerplate mongo-rs docker setup I found.
From my understanding, docker-compose is a tool that starts multiple containers that work together on the same machine.
So your 3 “mongo-rs0-X” containers will all be started on the same machine, which makes me want to ask a simple question:
Replica Sets are here for one main reason: High Availability in prod. If your 3 nodes depend on some piece of hardware they have in common (same power source, same disk bay, etc.), you are not really HA, because that piece of equipment can fail and bring all your nodes down at once. Which is a big NO NO in prod.
That’s the reason why it’s a good practice to deploy your nodes in different data centers.
I also see you are using --smallfiles, a deprecated option that only applied to the MMAPv1 storage engine (which is gone now), and --oplogSize 128 is definitely a terrible idea.
So, based on this, I think you are trying to deploy a development or test environment here, but then I really don’t see the point of deploying 3 nodes in the same cluster. A single node replica set would most probably be good enough, no?
Here is the docker command I use to start an ephemeral single replica set node on my machine when I need to hack something:
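Something along these lines (a sketch; the image tag, container name and replica set name are assumptions):

```sh
# Start an ephemeral mongod with a replica set enabled; the container hostname
# is set to the machine's hostname so the host can connect using the name the
# replica set advertises.
docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:6.0 --replSet=RS

# Give mongod a moment to start, then initiate the single-node replica set.
sleep 5
docker exec mongo mongosh --quiet --eval "rs.initiate();"
```

Once rs.initiate() returns, you should be able to connect to mongodb://localhost:27017/?directConnection=true and use transactions, Change Streams, etc.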
" It’s a nonsense to run 3 members of the same RS on the same machine. Running multiple data bearing mongod on the same machine shouldn’t exist."
How are we supposed to test transactions locally? We are already using docker compose for our local setup, so not having this functionality would make it impossible to test Mongo Transactions. How are you testing your transactions? Are you using them? PS: for the record, I think this is the only reason you would want to set up replicas locally, or to mirror your hosted env for testing, or for educational purposes. This feature does belong in mongod, however.
The “but why” meme I almost find offensive, because I am here ONLY because my team needs to test mongo transactions, and it was YOUR TEAM that implemented them in a way where they can only be tested with this replica configuration. So to come here, having spent my morning trying to get this to work, and see that you meme the OP for doing this when you created the problem is frustrating.
I’m sorry that you found that a bit offensive. I was being a bit sarcastic to REALLY explain why it doesn’t make sense and get the point across. If you read my entire post, the answer and justification are in it.
I explained in my answer why it’s a bad idea and I also explained the solution: Single Node Replica Set.
Transactions, Change Streams and a few other features in MongoDB rely on the special oplog collection that only exists in Replica Set setups. BUT you can set up a Single Node Replica Set that only contains a single Primary node, and all the features will work just as well as in a 7-node Replica Set.
So again, I reiterate:
It’s nonsense to run 3 members of the same RS on the same machine.
Use a Single Node RS instead. Same features, but it uses 3X fewer resources.
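For example, here is a quick way to check that transactions really do work on a single node RS (a sketch; it assumes the ephemeral container above, named mongo, and a throwaway test database):

```sh
# Run a multi-document transaction through mongosh to confirm the single-node
# replica set supports it (database and collection names are just examples).
docker exec mongo mongosh --quiet --eval '
  const session = db.getMongo().startSession();
  const coll = session.getDatabase("test").getCollection("txn_demo");
  session.startTransaction();
  coll.insertOne({ works: true });
  session.commitTransaction();
  print("transaction committed");
'
```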
Since the original question is basically “How can I access a replica set inside a docker network from the host?”, this solution might be useful:
Create a public DNS entry for a “localhost” host on a domain you own, for example localhost.somedomain.com → 127.0.0.1
Use this single hostname as the member name in the replica set config (use only one mongo in the set)
In your docker setup, override that DNS name so it resolves to the container running the replica. In docker-compose, the network aliases section can be used for this (see the snippet after these steps)
This results in name resolution that allows access to mongo via localhost.somedomain.com both from inside docker and from the host (given the right exposed port)
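A minimal docker-compose sketch of the aliases part (the service name, image tag and domain are assumptions):

```yaml
# Sketch: inside the compose network, "localhost.somedomain.com" resolves to the
# mongo container; on the host, public DNS resolves it to 127.0.0.1.
services:
  mongo:
    image: mongo:6.0
    command: --replSet rs0
    ports:
      - "27017:27017"
    networks:
      default:
        aliases:
          - localhost.somedomain.com
```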
Note that the usual solution to this problem is “modify your local /etc/hosts file”, which works too, but requires every dev/system in your organization to modify system files.
So in order to have a docker instance of a MongoDB replica set I need to modify the local /etc/hosts file?
How has that behavior ever made it into a production version?
What about environments where I do not have access to the system configuration (e.g. CI/CD pipelines)?
@MaBeuLux88_xxx are you really a mongo employee? This is for testing purposes.
@TheAdrianReza_N_A I believe it has something to do with the replica set config. You can try using the bitnami images with MONGODB_ADVERTISED_HOSTNAME set to localhost; if my own tests are successful I will post back.
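Something like this is what I have in mind (an untested sketch; the env variables come from the bitnami/mongodb image docs and the tag is an assumption):

```sh
# Untested sketch: single-node replica set that advertises "localhost",
# so the host can connect to mongodb://localhost:27017 directly.
docker run -d --name mongodb -p 27017:27017 \
  -e MONGODB_REPLICA_SET_MODE=primary \
  -e MONGODB_ADVERTISED_HOSTNAME=localhost \
  -e ALLOW_EMPTY_PASSWORD=yes \
  bitnami/mongodb:latest
```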
Sorry for the half response, but I had to chime in on this “nonsense” discussion with Maxime Beugnet!
@Marco_Maldonado, Maxime is actually correct. You don’t need a multi-node cluster for testing or doing things in MongoDB.
I don’t really agree with the “nonsense” comment, but I do understand the sentiment: unless your test environment is intended to directly evaluate performance as it would behave in an actual production environment, so that you have full awareness of the impact changes will have at the environment level, there really isn’t much need for a multi-node replica set.