The "Waiting for rollout to finish" message appears when kubectl rollout status cannot complete a deployment rollout within the expected timeframe. This blocks CI/CD pipelines and indicates your deployment is stuck due to pod failures, resource constraints, or timeout issues.
When you run `kubectl rollout status deployment/<name>`, kubectl waits for all new replicas to become ready and all old replicas to be terminated. If this doesn't complete, you see "Waiting for rollout to finish" along with the current replica counts (e.g., "2 of 3 updated replicas are available"). This happens when new pods fail to start, old pods don't shut down, or the rollout makes no progress within the `progressDeadlineSeconds` window (default 600 seconds, i.e. 10 minutes). The rollout is blocked at the pod level—either new pods aren't becoming ready, or old replicas aren't being replaced.
Get detailed information:
```
kubectl rollout status deployment/<name>
kubectl describe deployment <name>
kubectl get pods -o wide
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```
Look for pods in Pending, ImagePullBackOff, or CrashLoopBackOff. The Events section shows why pods aren't starting.
For each non-running pod:
```
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs <pod-name> --previous   # if the container crashed, see logs from the previous run
```
Common issues shown in `describe` output:
- "Pending": Check "Events" section for resource/scheduling issues
- "ImagePullBackOff": Image pull secret missing or credentials wrong
- "CrashLoopBackOff": Application exits immediately (see logs)
- "Not Ready": Readiness probe failing
Check whether resource constraints are blocking scheduling:
```
kubectl top nodes
kubectl describe nodes
kubectl top pods -n <namespace>
```
Look for:
- CPU/memory pressure on nodes
- "Insufficient cpu" or "Insufficient memory" in pod events
- Disk pressure preventing pod scheduling
If constrained, scale down other deployments or add nodes.
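For example, freeing capacity by scaling down a less critical workload is often enough to unblock scheduling. A sketch using a hypothetical deployment name `other-app`:

```
# Hypothetical example: temporarily reduce replicas of a non-critical deployment
kubectl scale deployment other-app --replicas=1 -n <namespace>
```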
If the rollout takes longer than 10 minutes (the default `progressDeadlineSeconds`), extend the deadline:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  progressDeadlineSeconds: 1800  # 30 minutes
  # ... rest of spec
```
Apply with `kubectl apply -f deployment.yaml`. This doesn't fix the underlying issue, but it gives slow rollouts time to complete.
If readiness probes are timing out, make them more lenient:
```yaml
spec:
  containers:
  - name: app
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30   # wait before the first check
      timeoutSeconds: 5
      periodSeconds: 10
      failureThreshold: 3
```
Increase `initialDelaySeconds` to allow the app time to start. Test the probe locally:
```
kubectl exec -it <pod> -- curl http://localhost:8080/health
```
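If the container image doesn't include curl, port-forwarding and probing from your workstation works as well. A small sketch, assuming the probe port and path shown above:

```
kubectl port-forward pod/<pod-name> 8080:8080
curl http://localhost:8080/health
```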
Check image configuration:
```
kubectl get deploy <name> -o yaml | grep -A2 image:
```
Test the pull:
```
docker pull <image:tag>
```
For private registries, ensure imagePullSecrets exist:
```
kubectl get secret <secret-name>
kubectl describe secret <secret-name>
```
Create if missing:
```
kubectl create secret docker-registry regcred \
  --docker-server=myregistry.com \
  --docker-username=user \
  --docker-password=pass
```
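Creating the secret alone isn't enough; the pod template must also reference it. A minimal sketch, assuming the `regcred` secret name from above:

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
      - name: regcred
```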
If old pods don't terminate, preStop hooks may be hanging:
```yaml
spec:
  containers:
  - name: app
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 15"]
```
Review the current setup:
```
kubectl get deploy <name> -o yaml | grep -A10 lifecycle:
```
Make sure preStop commands complete quickly. Also check `terminationGracePeriodSeconds` (default 30s):
```yaml
spec:
  terminationGracePeriodSeconds: 60  # increase if cleanup takes time
```
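If a pod stays in Terminating well past its grace period, you can locate it and, as a last resort, force-delete it. Use with care; this skips graceful shutdown:

```
kubectl get pods -n <namespace> | grep Terminating
kubectl delete pod <pod-name> --grace-period=0 --force
```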
If the rollout is completely blocked, restart the deployment:
```
kubectl rollout restart deployment/<name>
```
This terminates old pods and starts fresh replicas. Monitor progress:
```
kubectl rollout status deployment/<name>
```
If this also hangs, the issue is pod-level (resources, image, or probes), not rollout-specific.
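If the new revision itself is broken and the previous one was healthy, rolling back is another way to unblock a pipeline (a sketch, assuming revision history is retained):

```
kubectl rollout history deployment/<name>   # list available revisions
kubectl rollout undo deployment/<name>
```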
Track rollout progress in real-time:
```
kubectl get pods -n <namespace> -w   # watch mode, updates live
```
In another terminal:
```
kubectl describe pod <pod-name>   # see latest events
```
Watch for status changes: Pending → Running → Ready. If a pod stays Pending, check the node describe output for "Insufficient" messages.
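Events can also be streamed live, which is useful for catching scheduling and probe failures as they happen:

```
kubectl get events -n <namespace> -w
```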
Environment-specific notes:
- In Docker Desktop Kubernetes or Minikube with limited resources, consider reducing the replica count for testing: `kubectl scale deployment <name> --replicas=1`.
- For Helm upgrades, pass the `--wait` flag so Helm monitors the rollout automatically: `helm upgrade <release> <chart> --wait`.
- In CI/CD (GitHub Actions, GitLab CI), increase the pipeline timeout to match the Kubernetes `progressDeadlineSeconds` (see the sketch after this list).
- Slow-starting applications may need readiness probes with 30+ seconds of `initialDelaySeconds`.
- WSL2 can have slower disk I/O, which affects image pulls and pod startup—increase timeouts accordingly.
- Multi-zone clusters may have inter-zone latency that affects pod scheduling.
- GitOps tools such as ArgoCD and Flux track rollout health and can report completion through their notification integrations.
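A minimal GitHub Actions sketch for the CI/CD point above, assuming kubectl and a kubeconfig are already set up in the job and using the hypothetical deployment name `myapp`:

```yaml
# Hypothetical workflow fragment: keep the job alive for the full rollout window
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    timeout-minutes: 35              # slightly above a 30-minute progressDeadlineSeconds
    steps:
      - name: Wait for rollout
        run: kubectl rollout status deployment/myapp --timeout=30m
```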