A deployment failed to progress within the specified timeout (progressDeadlineSeconds). This indicates the pods are not becoming ready, typically because of image pull failures, readiness probe failures, resource constraints, or slow application startup. Diagnose the root cause by checking pod events and logs, then fix the underlying issue or, if startup is legitimately slow, raise the deadline.
The Kubernetes deployment controller tracks progress when rolling out a new version. If pods don't become ready within progressDeadlineSeconds (default 600 seconds), the deployment is marked with reason: ProgressDeadlineExceeded. This is not an automatic rollback—it's a signal that something is preventing the rollout from succeeding. The condition appears in deployment.status.conditions with type: Progressing and status: False.
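A quick way to confirm the condition from the command line (both commands are standard kubectl; the jsonpath filter simply extracts the Progressing condition's reason):
kubectl rollout status deployment/<deployment-name> -n <namespace>
kubectl get deployment <deployment-name> -n <namespace> \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'
rollout status exits with a non-zero code once the deadline is exceeded, which makes it convenient in scripts.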
View the deployment condition with more detail:
kubectl describe deployment <deployment-name> -n <namespace>
Look for:
- Condition section showing ProgressDeadlineExceeded
- Message explaining why it failed
- Replicas showing how many are desired vs. ready
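The output typically includes a conditions table like the following (illustrative, not from a real cluster):
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded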
Inspect the pods created by the deployment:
kubectl get pods -n <namespace> -l app=<label>
kubectl describe pod <pod-name> -n <namespace>
Look at:
- Pod phase (Pending, ContainerCreating, Running, Failed)
- Events section (shows image pull errors, probe failures, etc.)
- Container state (Waiting reason, exit code if Terminated)
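To scan the container state of every pod behind the deployment at once, a jsonpath one-liner works (this sketch assumes a single container per pod):
kubectl get pods -n <namespace> -l app=<label> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].state}{"\n"}{end}'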
Check application logs to see why it's not starting:
# View current logs
kubectl logs <pod-name> -n <namespace>
# View previous attempt if pod restarted
kubectl logs <pod-name> --previous -n <namespace>
# Stream logs as they appear
kubectl logs -f <pod-name> -n <namespace>
Common issues: connection timeouts, missing dependencies, config errors.
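If logs are empty because containers never start, the namespace events often hold the answer; sorting by timestamp puts the most recent failures last:
kubectl get events -n <namespace> --sort-by=.lastTimestamp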
Test image pull from your machine:
docker pull <image:tag>
# For private registries
echo $PASSWORD | docker login -u $USER --password-stdin <registry>
docker pull <registry>/<image:tag>
If it fails, check:
- Image name and tag spelling
- Registry URL
- Private registry credentials
- Image exists in the registry
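A pull that succeeds from your machine can still fail from the cluster, since node credentials and network differ. To test the pull in-cluster, a throwaway pod works; this sketch uses a hypothetical name pull-test and relies on the namespace's default pull credentials:
kubectl run pull-test --image=<registry>/<image:tag> --restart=Never -n <namespace>
kubectl get pod pull-test -n <namespace>   # look for ErrImagePull / ImagePullBackOff
kubectl delete pod pull-test -n <namespace>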
If using a private container registry:
kubectl create secret docker-registry regcred \
  --docker-server=<registry> \
  --docker-username=<user> \
  --docker-password=<password> \
  -n <namespace>
Then add to your deployment spec:
spec:
  template:
    spec:
      imagePullSecrets:
      - name: regcred
Verify the readiness probe is configured correctly:
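To verify the secret actually contains the credentials you expect, decode it (the .dockerconfigjson key is standard for docker-registry secrets):
kubectl get secret regcred -n <namespace> \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode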
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A 20 "readinessProbe"
Common issues:
- initialDelaySeconds is too low (app needs more time to start)
- timeoutSeconds is too low
- Probe is checking the wrong endpoint or port
- Application is not responding on the configured path
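You can test the probe endpoint by hand through a port-forward; this sketch assumes the /health path and port 8080 used in the example below:
kubectl port-forward <pod-name> 8080:8080 -n <namespace> &
curl -v http://localhost:8080/health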
Increase initialDelaySeconds if app startup is slow:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
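If startup time varies too widely for a fixed delay, a startupProbe (available since Kubernetes 1.18) may be a better fit than a large initialDelaySeconds; a sketch reusing the same endpoint, allowing up to 300 seconds (30 × 10s) before the other probes take over:
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30
  periodSeconds: 10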
Check if pods have enough CPU and memory:
kubectl describe pod <pod-name> -n <namespace> | grep -A 5 "Requests"
kubectl describe nodes | grep -B 5 -A 5 "Allocated resources"
If resources are insufficient, options include:
- Reduce pod resource requests in the deployment
- Add more nodes to the cluster
- Delete other pods to free resources
- Increase node capacity
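For example, to lower requests without editing the manifest (the values here are illustrative, not recommendations):
kubectl set resources deployment <deployment-name> -n <namespace> \
  --requests=cpu=100m,memory=128Mi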
If the underlying issue is slow but legitimate startup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment-name>
spec:
  progressDeadlineSeconds: 1200  # 20 minutes instead of 10
  template:
    spec:
      containers:
      - name: app
        image: <image:tag>
Apply the change:
kubectl apply -f deployment.yaml
NOTE: Only increase this after fixing the root cause. A high deadline on a broken deployment wastes time.
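If you prefer not to touch the manifest file, the same change can be applied with a strategic merge patch (the note above still applies):
kubectl patch deployment <deployment-name> -n <namespace> \
  -p '{"spec":{"progressDeadlineSeconds":1200}}'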
ProgressDeadlineExceeded is not a fatal condition: Kubernetes keeps retrying the rollout indefinitely unless you pause the deployment. You can safely pause and resume a rollout without triggering the deadline timer using kubectl rollout pause deployment/<deployment-name> and kubectl rollout resume deployment/<deployment-name>; the deadline check is suspended while the deployment is paused. If both fields are specified, progressDeadlineSeconds must be greater than minReadySeconds. In CI/CD pipelines, make sure any pipeline timeout is at least as long as progressDeadlineSeconds, and fix a failing deployment before re-applying rather than blindly raising the deadline. Note that the deadline can be exceeded before readiness probes ever run, for example when an image pull stalls.
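For pipelines, a typical pattern is to wait on the rollout with a timeout matched to the deadline; for example:
kubectl rollout status deployment/<deployment-name> -n <namespace> --timeout=20m
The command exits non-zero if the rollout does not complete in time, failing the pipeline step instead of hanging.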