libmongoc driver failover mechanism not working when the whole VM is stopped

I have a ChatScript C++ application using the libmongoc driver to interact with a MongoDB database. The application is deployed on Kubernetes with 4 forks. Our MongoDB setup includes a replica set consisting of 1 primary and 3 secondary nodes.

Recently, one of the MongoDB VMs stopped unexpectedly. When this happens, the ChatScript application begins restarting every 5 minutes. However, if only the MongoDB service is stopped on the VM (rather than the VM itself), the application remains stable and does not restart.

This leads us to suspect that libmongoc failover behaves differently when a VM stops than when only the MongoDB service stops, perhaps related to how the driver handles connections that are interrupted abruptly at the VM level rather than closed cleanly.

Can anyone help us understand why the application restarts when a MongoDB VM stops? Is there a specific configuration or behavior in libmongoc that we might need to address to resolve this?

Have you looked at the logs for the application and for your Kubernetes deployment?
The restarts are most likely triggered by your deployment's restart policy (e.g. a failing liveness probe or a crashing process), not by libmongoc itself.
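If a liveness probe is involved, it will appear in the deployment spec. A hypothetical fragment of what to look for (names, endpoint, and thresholds are illustrative, not from your deployment):

```yaml
# Illustrative container spec fragment. If MongoDB calls block for longer
# than the probe timeout while a VM is hard-down, kubelet marks the
# container unhealthy and restarts it on the schedule set here.
containers:
  - name: chatscript        # placeholder container name
    livenessProbe:
      httpGet:
        path: /healthz      # placeholder health endpoint
        port: 8080
      timeoutSeconds: 5
      periodSeconds: 60
      failureThreshold: 5
```

`kubectl describe pod <pod>` shows the last termination reason, and `kubectl logs <pod> --previous` shows the logs of the container instance that was restarted.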