I have deployed a MongoDB replica set using a StatefulSet in Kubernetes, with persistent volumes (PVs) attached to an NFS server. My cluster consists of 3 master nodes and 3 worker nodes. When I shut down worker1, the MongoDB pods running on that node remain in a terminating state and are not being rescheduled to another available worker node. Can someone help identify the issue and suggest how to resolve it?
It seems this issue is related to Kubernetes scheduling and Pod termination rather than the Operator itself. Workloads will not be rescheduled to another node while their Pods are stuck in the Terminating state.
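As a quick sanity check (a minimal sketch, assuming you have `kubectl` access and the Pods are in the current namespace), you can confirm that the node is down and the Pods are still bound to it:

```sh
# Check whether worker1 is reported NotReady after the shutdown
kubectl get nodes

# List the Pods together with the node they are bound to;
# Pods on the downed node will typically show STATUS=Terminating
kubectl get pods -o wide
```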
I suggest checking the Events associated with the stuck Pod to verify what is keeping it in the Terminating state. That is very likely the root of the problem you're facing. Alternatively, you can delete the Pod with the `--force` option (see the sketch below), but I'm not sure how your NFS and Persistent Volumes will behave.
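Here is a rough sketch of both steps. The Pod name `mongodb-0` is only a placeholder for whatever your StatefulSet actually produces, so substitute your own Pod name and namespace:

```sh
# Inspect the Events for the stuck Pod (shown at the bottom of the output)
kubectl describe pod mongodb-0

# Or query the Events for that Pod directly
kubectl get events --field-selector involvedObject.name=mongodb-0

# Last resort: force-remove the Pod so the StatefulSet controller can
# recreate it on another node. Use with care when the node is down,
# because the kubelet cannot confirm the old container has stopped,
# and the NFS-backed volume may still be attached from its point of view.
kubectl delete pod mongodb-0 --grace-period=0 --force
```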