I have deployed a MongoDB replica set using a StatefulSet in Kubernetes, with persistent volumes (PVs) attached to an NFS server. My cluster consists of 3 master nodes and 3 worker nodes. When I shut down worker1, the MongoDB pods running on that node remain in a terminating state and are not being rescheduled to another available worker node. Can someone help identify the issue and suggest how to resolve it?
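For reference, here is roughly how I'm observing the problem after shutting down worker1 (the `app=mongodb` label is just a placeholder for the selector my StatefulSet actually uses):

```
# See which node each MongoDB pod is on and its current status (stuck in Terminating)
kubectl get pods -l app=mongodb -o wide

# Confirm the node itself is reported NotReady after the shutdown
kubectl get nodes
```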
It seems this issue is related to Kubernetes scheduling and Pod termination rather than the Operator itself. Your workloads will not be rescheduled while the Pods are still in the Terminating state.

I suggest checking the Events associated with a Pod and identifying what is keeping it in the Terminating state. This is very likely the root of the problem you're facing. Alternatively, you can delete a Pod with the `--force` option, but I'm not sure how your NFS and Persistent Volumes will behave.
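A minimal sketch of the commands I had in mind (assuming the stuck pod is called `mongodb-0` and lives in `<namespace>`; adjust the names to match your deployment):

```
# Inspect the stuck pod's events to see what is blocking termination
kubectl describe pod mongodb-0 -n <namespace>

# Or query the events for that pod directly
kubectl get events -n <namespace> --field-selector involvedObject.name=mongodb-0

# As a last resort, force-delete the stuck pod so the StatefulSet
# controller can recreate it on a healthy node
kubectl delete pod mongodb-0 -n <namespace> --grace-period=0 --force

# Afterwards, verify the PVC/PV bindings are still intact, since
# NFS-backed volumes may not release cleanly after a forced deletion
kubectl get pvc -n <namespace>
kubectl get pv
```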