
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl exec -it mongo-0 -- bash

root@mongo-0:/# mongo mongo-0.mongo

MongoDB shell version v5.0.6

connecting to: mongodb://mongo-0.mongo:27017/test?compressors=disabled&gssapiServiceName=mongodb

Error: couldn't connect to server mongo-0.mongo:27017, connection attempt failed: HostNotFound: Could not find address for mongo-0.mongo:27017: SocketException: Host not found (non-authoritative), try again later :

connect@src/mongo/shell/mongo.js:372:17

@(connect):2:6

exception: connect failed

exiting with code 1

Hi @Ghazanfar_Rizvi1
Welcome to the community!!

From the above error, it seems the connection to the specified URL is not working.
Can you try adding the IP and hostname mappings manually to the pod's /etc/hosts by configuring hostAliases in the pod spec, in the following format:

hostAliases:
- hostnames:
  - "abcd.xyz.com"
  ip: "10.10.10.10"
- hostnames:
  - "mongodb.example.com"
  ip: "10.10.10.11"
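For context, here is a minimal sketch of where hostAliases would sit in the StatefulSet pod template. The names (mongo, rs0) and the mongod command are taken from the output in this thread; the IP and hostname are placeholders you would replace with your own values:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo                       # StatefulSet name assumed from the thread
spec:
  serviceName: mongo                # headless service shown in `kubectl get all`
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      hostAliases:                  # entries are appended to the pod's /etc/hosts
      - ip: "10.10.10.10"           # placeholder; replace with the real address
        hostnames:
        - "mongodb.example.com"     # placeholder hostname
      containers:
      - name: mongo
        image: mongo
        command: ["mongod", "--bind_ip_all", "--replSet", "rs0"]
        ports:
        - containerPort: 27017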

It would be good if you could share the pod logs or run kubectl describe po <podname> so we have more information to work with.
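For example, using the pod name mongo-0 from your output:

kubectl logs mongo-0            # container logs from the mongod process
kubectl describe pod mongo-0    # events, volumes, and scheduling details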

Please let us know if you have any further questions.

Thanks
Aasawari

Following is the cat output of /etc/hosts on the master node:

ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
172.31.47.143 cp

Following is the cat output of /etc/hosts from inside the pod shell:

ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl exec -it mongo-0 -- bash
root@mongo-0:/# ls -al /etc/hosts
-rw-r--r-- 1 root root 244 Mar 10 09:52 /etc/hosts
root@mongo-0:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
192.168.15.9 mongo-0.mongo.default.svc.cluster.local mongo-0

Now please advise: which /etc/hosts file should be modified?

ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl version --short
Client Version: v1.22.1
Server Version: v1.22.7
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl get nodes
NAME               STATUS   ROLES                  AGE   VERSION
ip-172-31-40-199   Ready    <none>                 24h   v1.22.1
ip-172-31-47-143   Ready    control-plane,master   24h   v1.22.1
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl get all,pv,pvc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/mongo-0                                   1/1     Running   0          14s
pod/mongo-1                                   1/1     Running   0          9s
pod/mongo-2                                   1/1     Running   0          5s
pod/nfs-client-provisioner-5c5487cdb8-2hbd9   1/1     Running   0          23h

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP     24h
service/mongo        ClusterIP   None         <none>        27017/TCP   14s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           23h

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-5c5487cdb8   1         1         1       23h

NAME                     READY   AGE
statefulset.apps/mongo   3/3     14s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
persistentvolume/pvc-9eef099f-b742-4755-afb8-43a2fb7a8223   1Gi        RWO            Delete           Bound    default/mongo-volume-mongo-1   nfs-client              9s
persistentvolume/pvc-ccf7a63a-1ba6-4cbb-b7cf-479c8c7847d2   1Gi        RWO            Delete           Bound    default/mongo-volume-mongo-2   nfs-client              5s
persistentvolume/pvc-e5bcf726-6336-41fb-ad12-7b1b1333572f   1Gi        RWO            Delete           Bound    default/mongo-volume-mongo-0   nfs-client              14s

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mongo-volume-mongo-0   Bound    pvc-e5bcf726-6336-41fb-ad12-7b1b1333572f   1Gi        RWO            nfs-client     14s
persistentvolumeclaim/mongo-volume-mongo-1   Bound    pvc-9eef099f-b742-4755-afb8-43a2fb7a8223   1Gi        RWO            nfs-client     9s
persistentvolumeclaim/mongo-volume-mongo-2   Bound    pvc-ccf7a63a-1ba6-4cbb-b7cf-479c8c7847d2   1Gi        RWO            nfs-client     5s
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$ kubectl describe pod mongo-0
Name:         mongo-0
Namespace:    default
Priority:     0
Node:         ip-172-31-40-199/172.31.40.199
Start Time:   Thu, 10 Mar 2022 09:52:30 +0000
Labels:       app=mongo
              controller-revision-hash=mongo-85cdfd8b57
              statefulset.kubernetes.io/pod-name=mongo-0
Annotations:  cni.projectcalico.org/containerID: 1e0e15d02ed73818d2c153c907045d22e73f75f84ec58d05296a0f4292f81761
              cni.projectcalico.org/podIP: 192.168.15.9/32
              cni.projectcalico.org/podIPs: 192.168.15.9/32
Status:       Running
IP:           192.168.15.9
IPs:
  IP:           192.168.15.9
Controlled By:  StatefulSet/mongo
Containers:
  mongo:
    Container ID:  cri-o://1a9bc12dd7df4858147dfdeb5cb7803f8666b10397e1a512df809c905ddcb36a
    Image:         mongo
    Image ID:      docker.io/library/mongo@sha256:03ef0031c1642df26d9d3efa9d57e24929672e1ae7aba5818227752089adde36
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      mongod
      --bind_ip_all
      --replSet
      rs0
    State:          Running
      Started:      Thu, 10 Mar 2022 09:52:32 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data/db from mongo-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8dl25 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mongo-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongo-volume-mongo-0
    ReadOnly:   false
  kube-api-access-8dl25:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  111s  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         109s  default-scheduler  Successfully assigned default/mongo-0 to ip-172-31-40-199
  Normal   Pulling           108s  kubelet            Pulling image "mongo"
  Normal   Pulled            107s  kubelet            Successfully pulled image "mongo" in 1.208269152s
  Normal   Created           107s  kubelet            Created container mongo
  Normal   Started           107s  kubelet            Started container mongo
ubuntu@ip-172-31-47-143:~/kubernetes/yamls/mongodb$

Hi @Ghazanfar_Rizvi1

Just for better understanding: have you been able to connect to this deployment before, or is this the first time and you are facing connectivity issues?

Also, could you confirm whether you can connect to the database using only mongo, without any server address? This connects to the local server and verifies that it is running there. You can also try with localhost or 127.0.0.1.
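For example, from inside the mongo-0 pod (a sketch based on the commands already shown in this thread):

kubectl exec -it mongo-0 -- bash
root@mongo-0:/# mongo                 # no address: defaults to localhost:27017
root@mongo-0:/# mongo 127.0.0.1       # loopback IP, bypasses DNS entirely
root@mongo-0:/# mongo mongo-0.mongo   # the failing hostname, for comparison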
If, however, you are able to connect at localhost, then the mentioned IP is not bound to the interface. You may need to enable the --bind_ip_all setting on mongod, since MongoDB binaries bind only to localhost by default (see security-mongodb-configuration).
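As a reference, the binding can be set either on the mongod command line (as the pod's describe output above already shows) or in the configuration file; the snippet below is a sketch of the equivalent mongod.conf settings, with the replica set name rs0 taken from this thread:

# mongod command line, as shown in the pod's describe output
mongod --bind_ip_all --replSet rs0

# equivalent mongod.conf (YAML) settings
net:
  port: 27017
  bindIpAll: true        # or: bindIp: 0.0.0.0
replication:
  replSetName: rs0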

You can refer to more documentation here: MongoDB Community Kubernetes Operator, which may help with the Kubernetes deployment. Also have a look at the related blog post that talks about the operator.

Let us know if more information is needed.

Thanks
Aasawari