This error occurs when a Kubernetes Deployment fails to reach its desired state within progressDeadlineSeconds (default 10 minutes). Pods may be stuck due to image pull failures, scheduling issues, or resource constraints.
In Kubernetes, a Deployment must progress toward its desired state within a specified deadline. The progressDeadlineSeconds field (default 600 seconds, or 10 minutes) defines how long the Deployment controller will wait before marking the rollout as failed with a ProgressDeadlineExceeded condition. When this error occurs, it means the rollout has been unable to create, schedule, or ready the required replicas within the timeout window. It is a symptom of an underlying issue (failed image pulls, insufficient resources, scheduling constraints, or API server communication problems) that prevents pods from reaching the Ready state.
Kubernetes does not automatically take corrective action when this deadline is exceeded. It simply reports the condition and continues attempting the rollout. Higher-level orchestrators (like Argo Rollouts) can use this signal to trigger automated rollbacks, but a base Deployment will keep retrying indefinitely until the root cause is resolved.
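The deadline is set per Deployment, at the top level of the spec. A minimal sketch with placeholder names:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment-name>
spec:
  progressDeadlineSeconds: 600  # mark the rollout as failed after 10 minutes without progress
  replicas: 3
  selector:
    matchLabels:
      app: <app-label>
  template:
    metadata:
      labels:
        app: <app-label>
    spec:
      containers:
      - name: app
        image: <image>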
Get a detailed view of your deployment to see the exact condition and status:
kubectl describe deployment <deployment-name> -n <namespace>
Look for the Conditions section, which will show:
- Type: Progressing, Status: False, Reason: ProgressDeadlineExceeded
- The Message field often hints at the root cause
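To pull just that condition from the deployment status, a jsonpath query like the following works:
kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.status.conditions[?(@.type=="Progressing")]}'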
Also check the ReplicaSet status:
kubectl get replicaset -n <namespace>
kubectl describe replicaset <replicaset-name> -n <namespace>
Examine the pods created by the ReplicaSet to identify why they're not progressing:
kubectl get pods -n <namespace> -l app=<app-label>
kubectl describe pod <pod-name> -n <namespace>
In the Events section, look for:
- ImagePullBackOff: Image cannot be pulled
- FailedScheduling: Pod cannot be scheduled
- CreateContainerConfigError: Missing secrets or configmaps
- CrashLoopBackOff: Container starts but repeatedly crashes or exits
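Namespace-level events often surface the same reasons without inspecting each pod; sorting by timestamp puts the newest last:
kubectl get events -n <namespace> --sort-by=.lastTimestamp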
Check pod logs:
kubectl logs <pod-name> -n <namespace> --previous
kubectl logs <pod-name> -n <namespace>
If you see ImagePullBackOff errors, validate that the container image exists and is accessible:
# Check the image reference in your deployment
kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.spec.template.spec.containers[*].image}'
# Verify imagePullSecrets are configured
kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.spec.template.spec.imagePullSecrets}'
If the secret is missing, create it:
kubectl create secret docker-registry <pull-secret-name> \
--docker-server=<registry-url> \
--docker-username=<username> \
--docker-password=<password> \
-n <namespace>
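Creating the secret alone is not enough; the pod spec must reference it. One way to attach it with a strategic merge patch, reusing the secret name created above:
kubectl patch deployment <deployment-name> -n <namespace> \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"<pull-secret-name>"}]}}}}'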
Verify that your cluster has sufficient resources and that no quotas are blocking pod creation:
# Check node resources
kubectl top nodes
kubectl describe nodes | grep -A 5 'Allocated resources'
# Check resource quotas in your namespace
kubectl describe resourcequota -n <namespace>
If resources are constrained, either increase the quota or reduce pod resource requests:
kubectl set resources deployment <deployment-name> \
--requests=cpu=100m,memory=128Mi \
-n <namespace>
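Note that kubectl set resources edits the pod template and therefore triggers a new rollout. The equivalent declarative change in the manifest, as a sketch:
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            cpu: 100m
            memory: 128Mi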
If pods are created but not becoming Ready, the issue is often failing readiness probes:
# Check readiness probe configuration
kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.spec.template.spec.containers[*].readinessProbe}' | jq .
Adjust the probe settings for slow-starting applications:
spec:
  template:
    spec:
      containers:
      - name: app
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30  # Give app time to start
          timeoutSeconds: 5
          periodSeconds: 10
          failureThreshold: 3
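For genuinely slow-starting apps, a startupProbe is often a better fit than a long initialDelaySeconds, since it gates the other probes until the app is up. A sketch reusing the same hypothetical /health endpoint, placed as a sibling of readinessProbe under the container:
        startupProbe:
          httpGet:
            path: /health
            port: 8080
          failureThreshold: 30  # allow up to 30 * 10s = 5 minutes to start
          periodSeconds: 10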
If the root cause is just slow startup, extend the progress deadline:
# Increase deadline to 20 minutes
kubectl patch deployment <deployment-name> -n <namespace> -p '{"spec":{"progressDeadlineSeconds":1200}}'
If you need to roll back:
# Check rollout history
kubectl rollout history deployment <deployment-name> -n <namespace>
# Rollback to the previous revision
kubectl rollout undo deployment <deployment-name> -n <namespace>
# Verify the rollback
kubectl rollout status deployment <deployment-name> -n <namespace> --timeout=300s
By default, Kubernetes uses the RollingUpdate strategy, which gradually replaces old pods. The progressDeadlineSeconds clock is measured from the last time the rollout made progress, not from the start of the rollout. When using Argo Rollouts, you also get progressDeadlineAbort, which stops and rolls back immediately upon exceeding the deadline, whereas native Kubernetes Deployments continue retrying.
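For reference, a sketch of those fields on an Argo Rollouts manifest (assumes a recent Argo Rollouts version that supports progressDeadlineAbort; selector, template, and strategy are omitted):
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: <rollout-name>
spec:
  progressDeadlineSeconds: 600
  progressDeadlineAbort: true  # abort and roll back instead of retrying indefinitely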
The minReadySeconds field defines how long a pod must be Ready before it counts toward the desired state. Setting this too high combined with slow readiness probes can cause artificial timeout conditions. Ensure minReadySeconds < progressDeadlineSeconds.
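To check the current value (empty output means the field is unset and defaults to 0):
kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.spec.minReadySeconds}'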
For debugging, note that kubectl rollout status also has its own client-side --timeout, separate from progressDeadlineSeconds. Pass a generous value so a client-side timeout doesn't mask the actual issue.
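For example, to watch the rollout for up to an hour:
kubectl rollout status deployment <deployment-name> -n <namespace> --timeout=1h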