A Kubernetes resource deletion hangs because a finalizer is blocking removal. This occurs when cleanup logic fails, webhooks time out, or the finalizer's controller crashes. Finalizers are safety mechanisms that prevent a resource from being removed until cleanup is complete.
Finalizers are entries in a resource's `metadata.finalizers` list; a controller watches for them and runs cleanup logic before the resource is permanently deleted from etcd. When you delete a resource, Kubernetes sets the `deletionTimestamp` and waits for all finalizers to be removed before actual deletion. If the controller responsible for a finalizer fails or hangs, the resource enters a "stuck" state where it appears deleted but isn't actually gone. This is a safety feature: if a finalizer can't be satisfied, the resource owner can intervene. However, stuck finalizers can block applications from redeploying or updating.
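The stuck state described above is easy to detect programmatically: deletion is in progress (`deletionTimestamp` set) but finalizers remain. A minimal sketch in Python; the helper name `is_deletion_stuck` is my own, and the input is a resource as returned by `kubectl get ... -o json`:

```python
import json

def is_deletion_stuck(resource: dict) -> bool:
    """A resource is stuck in deletion when deletionTimestamp is set
    but entries remain in metadata.finalizers."""
    meta = resource.get("metadata", {})
    return meta.get("deletionTimestamp") is not None and bool(meta.get("finalizers"))

# Example resource, shaped like `kubectl get pvc my-pvc -o json` output
stuck = json.loads("""{
  "metadata": {
    "name": "my-pvc",
    "deletionTimestamp": "2024-01-01T00:00:00Z",
    "finalizers": ["kubernetes.io/pvc-protection"]
  }
}""")
print(is_deletion_stuck(stuck))  # True
```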
List finalizers:
kubectl get <resource-type> <name> -o yaml | grep finalizers -A 10
Or more directly:
kubectl get <resource-type> <name> -o jsonpath="{.metadata.finalizers}"
This shows you exactly which finalizers are preventing deletion.
Check if the controller that owns the finalizer is healthy:
kubectl get pods -n <namespace> | grep <controller-name>
kubectl describe pod <controller-pod> -n <namespace> # Check for errors
kubectl logs <controller-pod> -n <namespace> --tail=100 # View recent logs
If the controller is down, restart it:
kubectl delete pod <controller-pod> -n <namespace> # Forces restart
If a validating/mutating webhook is involved, check its logs:
kubectl get validatingwebhookconfigurations # List all webhooks
kubectl describe validatingwebhookconfiguration <name>
Find the webhook's pod and check logs:
kubectl logs <webhook-pod> -n <namespace> --tail=100 | grep -i finalizer
If the webhook is timing out, verify it's reachable and responsive.
Some finalizers need dependent resources to complete cleanup. Verify they exist:
# Example: If finalizer references a ConfigMap
kubectl get configmap <name> -n <namespace>
If it's missing, create a dummy resource or remove the finalizer manually (see step 5).
If the finalizer controller is permanently gone, remove the finalizer manually:
kubectl patch <resource-type> <name> -p '{"metadata":{"finalizers":null}}' --type merge
# Or for specific finalizer:
kubectl patch <resource-type> <name> -p '{"metadata":{"finalizers":["other-finalizer"]}}' --type merge
WARNING: Only do this if you understand the cleanup impact. Removing a finalizer bypasses its cleanup logic.
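When only one finalizer should be dropped, you can compute the patch body instead of retyping the remaining list by hand. A sketch in Python (the helper name `finalizer_patch` is my own); note that a JSON merge patch replaces the list wholesale, so the full remaining list must be sent:

```python
import json

def finalizer_patch(current_finalizers, to_remove):
    """Build a JSON merge patch body for `kubectl patch ... --type merge`
    that drops `to_remove` from the finalizer list. Merge patches replace
    lists wholesale, so we send the full remaining list (or null if empty)."""
    remaining = [f for f in current_finalizers if f != to_remove]
    return json.dumps({"metadata": {"finalizers": remaining or None}})

patch = finalizer_patch(
    ["example.com/cleanup", "other-finalizer"], "example.com/cleanup")
print(patch)  # {"metadata": {"finalizers": ["other-finalizer"]}}
# Then: kubectl patch <resource-type> <name> -p "$patch" --type merge
```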
Alternatively, use kubectl edit:
kubectl edit <resource-type> <name>
# Remove the finalizer from the metadata.finalizers list, save and exit
You can also force deletion after a timeout:
kubectl delete <resource-type> <name> --grace-period=0 --force
Note that `--force --grace-period=0` skips graceful termination but does not bypass finalizers; kubectl has no `--ignore-finalizers` flag. If finalizers are still present, the resource stays stuck until they are removed with `kubectl patch` or `kubectl edit` as shown above. Use forced deletion only if the resource is confirmed stuck.
Review the controller or operator managing the finalizer:
# Check operator logs
kubectl logs -n <operator-namespace> -l app=<operator-name> --tail=200
# Check whether webhooks are misconfigured
kubectl get validatingwebhookconfigurations -o yaml | grep -i finalizer
Common fixes:
- Increase the webhook timeout (timeoutSeconds; defaults to 10 seconds, maximum 30) in the webhook config
- Set a failure policy of failurePolicy: Ignore if the admission check isn't critical
- Fix controller restart issues by increasing memory/CPU limits
- Add better error handling in custom controller code
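The first two fixes above map to fields in the webhook configuration. A fragment showing where they live (all names here are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-webhook            # placeholder name
webhooks:
  - name: validate.example.com
    timeoutSeconds: 30             # default is 10; 30 is the maximum allowed
    failurePolicy: Ignore          # don't block API requests if the webhook is down
    clientConfig:
      service:
        name: example-webhook-svc  # placeholder service
        namespace: example-ns
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["DELETE"]
        resources: ["configmaps"]
```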
Use kubectl to watch what's happening:
kubectl get events -n <namespace> --field-selector involvedObject.name=<resource-name> --sort-by='.lastTimestamp'
This shows errors from controllers trying to process the deletion. Look for:
- Webhook timeout errors
- Resource not found errors
- Permission denied errors (RBAC issue)
- Communication failures (network, DNS)
Addressing these specific errors guides the fix.
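Triaging those event messages can be scripted when many resources are stuck. A rough sketch in Python; the substring patterns below are illustrative, not an exhaustive list:

```python
def classify_event(message: str) -> str:
    """Map a raw event message to a likely stuck-finalizer cause."""
    msg = message.lower()
    if "timeout" in msg or "deadline exceeded" in msg:
        return "webhook or controller timeout"
    if "not found" in msg:
        return "missing dependent resource"
    if "forbidden" in msg or "permission denied" in msg:
        return "RBAC issue"
    if "connection refused" in msg or "no such host" in msg:
        return "network/DNS failure"
    return "unknown - inspect manually"

print(classify_event("failed calling webhook: context deadline exceeded"))
# webhook or controller timeout
```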
Finalizers are a double-edged sword: they prevent accidental deletion but can leave resources stuck. In production, use them sparingly and implement cleanup with retries and timeout handling. For operators, retry cleanup with exponential backoff before removing the finalizer. If webhooks participate in finalizer logic, always set reasonable timeouts and give the webhook pod resource requests and limits. In multi-tenant clusters, a stuck finalizer on a namespaced resource is scoped to its namespace, though it can still block that namespace from being deleted. Controllers evicted under node memory or PID pressure stop processing finalizers, so set appropriate kubelet eviction thresholds and resource requests for controller pods to keep stuck finalizers from accumulating during resource exhaustion. For GitOps tools (ArgoCD/Flux), configure resource finalizers carefully to avoid blocking reconciliation loops.
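The exponential-backoff advice for operators can be sketched as a simplified loop, independent of any specific operator framework; `run_cleanup` here is a placeholder for your cleanup logic:

```python
import time

def cleanup_with_backoff(run_cleanup, max_attempts=5, base_delay=1.0):
    """Retry cleanup with exponential backoff; only report success
    (and thus allow finalizer removal) once cleanup actually finishes."""
    for attempt in range(max_attempts):
        try:
            run_cleanup()
            return True          # safe to remove the finalizer now
        except Exception:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False                 # leave the finalizer in place; requeue and alert

# Usage: only patch the finalizer away once cleanup_with_backoff(...) returns True
```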