A namespace is stuck in Terminating state because finalizers have not been removed by their responsible controllers. This blocks namespace deletion and often indicates crashed operators or stuck resources.
A Kubernetes namespace gets stuck in the Terminating state when the deletion process cannot complete because finalizers (metadata keys that guard resource deletion) have not been removed by the controllers responsible for them. When you delete a namespace, Kubernetes sets its phase to 'Terminating', garbage-collects all dependent resources, and waits for their finalizers to be removed. If any finalizer remains, whether on the namespace itself or on a resource inside it, the namespace stays in the Terminating state indefinitely. Finalizers are protection mechanisms that give controllers time to clean up owned resources before deletion completes; common sources include CustomResource controllers, admission webhooks, and storage providers. When a controller crashes or malfunctions before removing its finalizer, or when a finalizer points to a resource that no longer exists, the namespace enters a permanent limbo. This is particularly problematic because stuck namespaces consume cluster resources and make it impossible to fully remove abandoned projects or test environments.
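If you are not sure which namespaces are affected, filtering on the STATUS column is a quick first check (nothing beyond standard kubectl output is assumed):

kubectl get namespaces | grep Terminating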
Retrieve the namespace definition to see which finalizers are present:
kubectl get namespace <namespace-name> -o json | jq '.metadata.finalizers'

Also check the namespace conditions:
kubectl describe namespace <namespace-name>

This will show you exactly which finalizers are blocking deletion. Common finalizers include 'kubernetes' (built-in), controller finalizers like 'crd.projectcalico.org/finalizer', and storage finalizers.
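Note that a namespace keeps its built-in finalizer under spec.finalizers rather than metadata.finalizers, and the status conditions (for example NamespaceContentRemaining or NamespaceFinalizersRemaining) name what is still blocking deletion. Two quick checks, with <namespace-name> as a placeholder:

kubectl get namespace <namespace-name> -o jsonpath='{.spec.finalizers}'
kubectl get namespace <namespace-name> -o json | jq '.status.conditions'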
Use this comprehensive command to list ALL resources including CustomResources:
kubectl api-resources --verbs=list --namespaced=true -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace-name>

Note: 'kubectl get all' does NOT return CustomResources. Pay special attention to:
- PersistentVolumeClaims still bound to pods
- CustomResource instances from installed operators
- Any resources with Status.Conditions showing issues
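If you suspect CustomResources are the blockers, a small loop over the namespaced CRDs narrows the search. This is a sketch that assumes jq is installed and <namespace-name> is replaced with your namespace:

# List remaining instances of every namespaced CRD in the target namespace
for crd in $(kubectl get crd -o json | jq -r '.items[] | select(.spec.scope=="Namespaced") | .metadata.name'); do
  kubectl get "$crd" -n <namespace-name> --ignore-not-found
done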
For each stuck resource, delete it with grace-period=0:
kubectl delete <resource-type> <resource-name> -n <namespace-name> --grace-period=0 --force --wait=false

For example, if a PVC is stuck:
kubectl delete pvc my-stuck-pvc -n <namespace-name> --grace-period=0 --force

The --wait=false flag returns immediately. This may leave some cleanup incomplete, so only use it after verifying that the resources are truly orphaned.
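Pods that hang in Terminating are among the most common offenders. The loop below force-deletes all of them at once; it is a sketch that assumes the default kubectl column layout (STATUS in the third column):

# Force-delete every pod in the namespace still shown as Terminating
for pod in $(kubectl get pods -n <namespace-name> --no-headers | awk '$3 == "Terminating" {print $1}'); do
  kubectl delete pod "$pod" -n <namespace-name> --grace-period=0 --force
done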
If force delete doesn't work, patch the finalizers directly:
kubectl patch <resource-type> <resource-name> -n <namespace-name> -p '{"metadata":{"finalizers":null}}' --type=merge

For a stuck PVC:
kubectl patch pvc my-pvc -n <namespace-name> -p '{"metadata":{"finalizers":null}}'

After patching individual resources, they should delete immediately.
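When an operator has left finalizers on many instances of the same CustomResource type, patching them one by one is tedious. The loop below clears them in bulk; it is a sketch, and 'widgets.example.com' is a hypothetical CRD name used only for illustration:

# Strip finalizers from every instance of a (hypothetical) custom resource type
for name in $(kubectl get widgets.example.com -n <namespace-name> -o name); do
  kubectl patch "$name" -n <namespace-name> --type=merge -p '{"metadata":{"finalizers":null}}'
done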
Then attempt namespace deletion again:

kubectl delete namespace <namespace-name>

If the namespace is still stuck, remove the namespace finalizers directly:
kubectl patch namespace <namespace-name> -p '{"metadata":{"finalizers":null}}'

Or use the finalize endpoint directly:
NAMESPACE=<namespace-name>
kubectl get namespace $NAMESPACE -o json | jq 'del(.spec.finalizers)' | kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f -

This last method is the most reliable when other kubectl commands fail.
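An equivalent route, sometimes useful when 'kubectl replace --raw' misbehaves, is to send the stripped manifest to the finalize endpoint through kubectl proxy. This sketch assumes curl and jq are available:

kubectl get namespace $NAMESPACE -o json | jq 'del(.spec.finalizers)' > /tmp/ns.json
kubectl proxy --port=8001 &
curl -s -H "Content-Type: application/json" -X PUT --data-binary @/tmp/ns.json \
  "http://127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize"
kill %1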
After clearing finalizers, verify the namespace is gone:
kubectl get namespace <namespace-name>

You should see 'Error from server (NotFound)'. After successful deletion, check for orphaned resources:
# Check for orphaned PVs
kubectl get pv | grep -i released
# Check for stuck pods cluster-wide
kubectl get pods --all-namespaces | grep Terminating

Manually clean up any remaining orphaned resources:
kubectl patch pv <orphaned-pv-name> -p '{"metadata":{"finalizers":null}}'
kubectl delete pv <orphaned-pv-name> --grace-period=0 --force

Finalizers are a critical Kubernetes mechanism. Every namespace carries at least the built-in 'kubernetes' finalizer (stored under spec.finalizers), managed by the namespace lifecycle controller. When you delete a namespace, this controller waits for all child resources to be deleted and their finalizers removed before clearing its own finalizer.
Always investigate why finalizers exist before force-removing them. Understanding the cause helps prevent data loss. Use 'kubectl get <resource> -o yaml | grep -A 10 finalizers' on all resources.
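To avoid checking resources one at a time, the pipeline below prints every remaining resource in the namespace that still carries finalizers. It is a sketch that assumes jq is installed:

kubectl api-resources --verbs=list --namespaced=true -o name \
  | xargs -n 1 kubectl get -o json -n <namespace-name> --ignore-not-found 2>/dev/null \
  | jq -r '.items[]? | select(.metadata.finalizers != null) | "\(.kind)/\(.metadata.name): \(.metadata.finalizers)"'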
Admission webhooks can block all deletions if misconfigured or pointing to unavailable services. List webhooks with: kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations and check their failurePolicy.
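A webhook with failurePolicy: Fail and an unreachable backend can reject the very delete requests the namespace controller issues. To see each configuration's policy at a glance (a sketch using kubectl's custom-columns output):

kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,FAILURE_POLICY:.webhooks[*].failurePolicy'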
Third-party operators (Helm, ArgoCD, service mesh controllers) frequently add finalizers. Stuck namespaces often indicate operator pods crashed or the operator CRD was deleted without cleaning up instances. Check operator logs: kubectl logs -n <operator-namespace> deployment/<operator-name>
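It is also worth confirming the operator is actually running and that its CRDs still exist; <operator-namespace> and <operator-name> are placeholders for your installation:

kubectl get pods -n <operator-namespace>       # is the operator Running, or CrashLoopBackOff?
kubectl get crd | grep -i <operator-name>      # were its CRDs removed while instances remained?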
When dealing with CRD finalizers, deletion order matters. If a parent CRD deletes before children finish cleaning up, children become orphaned. Always delete leaf resources first, then parents.
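For example, respecting that ordering means removing the dependent (leaf) custom resources first and only then the parents. The kinds below are hypothetical and only illustrate the pattern:

# Delete leaf custom resources before their parents (hypothetical kinds)
kubectl delete widgetinstances.example.com --all -n <namespace-name>
kubectl delete widgets.example.com --all -n <namespace-name>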
After force-deleting a namespace with stuck finalizers, always check for orphaned resources (PVs still allocated, unused storage volumes, dangling load balancers). These won't automatically clean up and will continue consuming resources.
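If several PersistentVolumes were left behind in the Released phase, a loop can clear them once you have confirmed each volume's data is no longer needed. This is a sketch, not a definitive cleanup; deleting a PV is irreversible:

for pv in $(kubectl get pv -o json | jq -r '.items[] | select(.status.phase == "Released") | .metadata.name'); do
  kubectl patch pv "$pv" -p '{"metadata":{"finalizers":null}}'
  kubectl delete pv "$pv" --grace-period=0 --force
done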