Pods stuck in Terminating status usually have finalizers blocking deletion, processes that ignore termination signals, or node communication issues. Investigate the cause, then remove the finalizers or force delete the pod.
When a pod is stuck in Terminating status, Kubernetes has initiated deletion but the pod cannot complete the termination process. The pod has a deletion timestamp set but remains in the cluster indefinitely. This happens when something prevents the graceful shutdown from completing—whether it's a finalizer waiting for an external condition, a process ignoring termination signals, or the node being unreachable. The pod object persists in the API even though the container may have already stopped.
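To confirm a pod is in this state, check for the deletion timestamp directly, or list every pod the API still shows as Terminating (the pod name is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
kubectl get pods --all-namespaces | grep Terminating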
Inspect the pod for finalizers and status:
kubectl get pod <pod-name> -o yaml | grep -A 5 finalizers
kubectl describe pod <pod-name>
Look for:
- finalizers: section in metadata
- Events showing termination progress
- Node status if pod is stuck on a specific node
Also check the node:
kubectl get nodes
kubectl describe node <node-name>
Finalizers block deletion until they are removed. If the controller responsible for a finalizer isn't running or can't complete its cleanup, remove the finalizers manually:
kubectl patch pod <pod-name> -p '{"metadata":{"finalizers":null}}'
Or edit the pod directly:
kubectl edit pod <pod-name>
# Remove the finalizers section, save, and exit
Caution: Only remove finalizers if you understand what cleanup they were supposed to perform.
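Before clearing anything, it helps to see exactly which finalizers are set so you know what cleanup is being skipped; one way is a jsonpath query:
kubectl get pod <pod-name> -o jsonpath='{.metadata.finalizers}{"\n"}'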
If the pod remains stuck after removing finalizers:
kubectl delete pod <pod-name> --grace-period=0 --force
This tells the API server to remove the pod object immediately without waiting for confirmation from the kubelet.
Warning: Force delete doesn't guarantee the container stopped. The process may still be running on the node. Only use when the node is unreachable or you've verified the container is gone.
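If you can reach the node, one way to check whether the container is actually gone before force deleting (assuming a CRI runtime such as containerd with crictl installed on the node):
ssh <node-ip>
crictl pods --name <pod-name>
crictl ps -a | grep <pod-name>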
If the node is reachable but kubelet isn't responding:
# SSH to the node
ssh <node-ip>
# Check kubelet status
systemctl status kubelet
# View kubelet logs for errors
journalctl -u kubelet -n 100
# Restart if necessary
sudo systemctl restart kubelet
After kubelet restarts, it will sync state with the API server and complete pending terminations.
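To confirm the sync worked, watch the stuck pod until it disappears from the API:
kubectl get pod <pod-name> -o wide --watch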
Stuck volume unmounts can block termination:
# Check PV/PVC status
kubectl get pv
kubectl get pvc
# Look for volume attachment issues
kubectl describe volumeattachment
For cloud providers, check if the volume is stuck in detaching state in the cloud console. You may need to force-detach the volume.
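A compact way to see which attachments are still bound to which nodes; the custom-columns paths below follow the VolumeAttachment API fields:
kubectl get volumeattachments -o custom-columns=NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached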
On the node:
# Check mounted volumes
mount | grep kubernetes
lsof +D /var/lib/kubelet/pods/<pod-uid>/volumes
Ensure applications handle SIGTERM properly:
import signal
import sys

def shutdown(signum, frame):
    # Cleanup code
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)
Set appropriate grace periods:
spec:
  terminationGracePeriodSeconds: 30  # Adjust based on app needs
Use preStop hooks carefully; ensure they complete within the grace period.
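To see what grace period a running pod actually has (useful when the manifest and the live object have drifted):
kubectl get pod <pod-name> -o jsonpath='{.spec.terminationGracePeriodSeconds}{"\n"}'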
For pods on NotReady nodes, Kubernetes waits for the node-monitor-grace-period (default 40s) then pod-eviction-timeout (default 5m) before marking pods for deletion. During this time, pods show Terminating but can't actually terminate until the node recovers or is removed.
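On clusters that use taint-based eviction (the default on current versions), this roughly five-minute window is implemented by the default tolerations injected into pods for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints; you can inspect them with jsonpath:
kubectl get pod <pod-name> -o jsonpath='{range .spec.tolerations[*]}{.key}{"\t"}{.tolerationSeconds}{"\n"}{end}'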
To handle node failures faster:
- Reduce pod-eviction-timeout in the controller-manager (or, on clusters using taint-based eviction, lower tolerationSeconds on your pods)
- Use pod disruption budgets to control how many pods can be evicted at once during voluntary disruptions such as drains
- Configure node problem detector for faster issue detection
Job tracking finalizers (Kubernetes 1.27+) prevent pod deletion until the Job controller records the pod's final status. If the Job controller is unhealthy, these pods remain stuck. Check kube-controller-manager logs for Job-related errors.
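To see whether stuck pods are being held by the Job tracking finalizer, and to check the controller manager, something like the following works (the label selector assumes a kubeadm-style control plane; adjust for your distribution):
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.finalizers}{"\n"}{end}' | grep batch.kubernetes.io/job-tracking
kubectl logs -n kube-system -l component=kube-controller-manager --tail=100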