I have deployed MongoDB on my k8s cluster, exposed externally.
When I run rs.status(), it shows my replica set members as:
id: 0 host: 162.28.1.80:30001
id: 1 host: 162.28.1.80:30002
id: 2 host: 162.28.1.82:30003
Each host resolves to the IP of the node its pod was created on, and the port comes from the NodePort service.
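For reference, the replica set was initiated with those node-IP:NodePort pairs as member hosts, roughly like this (the replica set name rs0 is taken from my connection string below; exact options may differ):

// member hosts are the node IP + NodePort pairs shown by rs.status() above
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "162.28.1.80:30001" },
    { _id: 1, host: "162.28.1.80:30002" },
    { _id: 2, host: "162.28.1.82:30003" }
  ]
})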
Now I need to access this service from outside the k8s cluster.
So I have to open ports through an F5 load balancer: 55243, 55244, and 55245, which should route to 30001, 30002, and 30003 respectively.
The host I connect to (mongodb-host) has the VIP 172.28.0.48, which should route to all of the node IPs seen above in the replica set configuration.
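To spell out the intended path, each external port maps to one member's NodePort (reachable on any node through the VIP):

mongodb-host:55243 (VIP 172.28.0.48) -> F5 -> node:30001 (member 0)
mongodb-host:55244 (VIP 172.28.0.48) -> F5 -> node:30002 (member 1)
mongodb-host:55245 (VIP 172.28.0.48) -> F5 -> node:30003 (member 2)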
Now when I try to connect, say like this:
mongodb://root:*****@mongodb-host:55243,mongodb-host:55244,mongodb-host:55245/?replicaSet=rs0
it doesn't connect at all, unless I change the replica set host configuration to:
id: 0 host: mongodb-host:55243
id: 1 host: mongodb-host:55244
id: 2 host: mongodb-host:55245
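For reference, that change can be applied with a reconfig from mongosh, something like:

// fetch the current config, point each member at the externally
// reachable host:port, then apply the new config
cfg = rs.conf()
cfg.members[0].host = "mongodb-host:55243"
cfg.members[1].host = "mongodb-host:55244"
cfg.members[2].host = "mongodb-host:55245"
rs.reconfig(cfg)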
What can I do to solve this?
I don't want the replica set to rely internally on the external host: that's not an ideal solution, because if anything happens to the external connection my MongoDB instance goes down and I can no longer reach the replica set to troubleshoot.