CrashLoopBackOff indicates a container is repeatedly crashing and restarting. Kubernetes applies exponential backoff delays between restarts while you diagnose the underlying application or configuration issue.
CrashLoopBackOff is not an error itself but a pod state indicating that a container is stuck in a restart loop. When a container crashes, Kubernetes automatically restarts it according to the pod's restart policy. If the container keeps crashing, Kubernetes implements exponential backoff delays (10s, 20s, 40s, up to 5 minutes) between restart attempts to prevent resource exhaustion. This state appears when something fundamental prevents the container from starting successfully—whether it's a misconfigured application, missing dependencies, resource constraints, or probe failures. The actual root cause requires investigation through logs and pod events.
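The doubling-with-cap schedule can be simulated locally (a sketch, not kubelet code; note that the kubelet resets the backoff after a container runs successfully for 10 minutes):

```shell
#!/bin/sh
# Simulate the restart backoff: the delay doubles after each crash
# (10s, 20s, 40s, ...) and is capped at 300s (5 minutes).
delay=10
for attempt in 1 2 3 4 5 6 7; do
  echo "restart attempt $attempt: wait ${delay}s"
  delay=$((delay * 2))
  [ "$delay" -gt 300 ] && delay=300
done
```

By attempt 6 the pod is waiting the full 5 minutes between restarts, which is why a pod can sit in CrashLoopBackOff for a long time with no log activity.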
Get an overview of the pod state and any cluster events:
kubectl get pods
kubectl describe pod <pod-name>
Look at the Events section for clues about why the container is failing. Check the Last State field for exit codes: 137 means the container was killed with SIGKILL (usually OOMKilled), while 1 typically indicates an application error.
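The exit-code arithmetic can be verified on any machine: codes above 128 mean the process died from a signal, and the code is 128 plus the signal number.

```shell
#!/bin/sh
# Exit codes above 128 encode a fatal signal: code = 128 + signal number.
# 137 = 128 + 9 (SIGKILL, what the OOM killer sends)
# 143 = 128 + 15 (SIGTERM, a graceful shutdown request)
sh -c 'kill -9 $$'
echo "SIGKILL exit code: $?"   # prints 137
sh -c 'kill -15 $$'
echo "SIGTERM exit code: $?"   # prints 143
```

So a 137 in Last State does not always mean OOMKilled in the literal sense; it means SIGKILL, for which the OOM killer is the most common sender.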
Since the container keeps crashing, you need to see logs from the previous instance:
kubectl logs <pod-name> --previous
For multi-container pods, specify the container:
kubectl logs <pod-name> -c <container-name> --previous
These logs often reveal the exact error causing the crash.
Check if the container is being OOMKilled due to memory limits:
kubectl describe pod <pod-name> | grep -A 5 'Last State'
If you see "OOMKilled" or exit code 137, increase the memory limits in your deployment:
resources:
  limits:
    memory: "512Mi"
  requests:
    memory: "256Mi"
Aggressive liveness probes can kill containers before they finish starting:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 60  # Give app time to start
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
Increase initialDelaySeconds if your application needs more startup time.
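For genuinely slow-starting applications, a startupProbe is often a better fit than a large initialDelaySeconds, because Kubernetes holds off liveness checks until the startup probe succeeds. A sketch, assuming the same /health endpoint:

```yaml
startupProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 30  # up to 300s of startup time before liveness checks begin
```

With this in place, the liveness probe's initialDelaySeconds can stay small, since it only starts counting once the app has proven it is up.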
If logs don't reveal the issue, start a debugging session by overriding the container command:
command: ["sleep", "infinity"]
Then exec into the container to investigate:
kubectl exec -it <pod-name> -- /bin/sh
Manually run the application command to see its errors in real time.
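In a Deployment manifest, the override sits on the container entry and replaces the image's entrypoint. A sketch, assuming a container named app (remember to remove the override afterwards, and temporarily remove any liveness probe, which would otherwise kill the sleeping container):

```yaml
containers:
  - name: app                       # hypothetical container name
    image: myapp:latest             # your existing image
    command: ["sleep", "infinity"]  # replaces the image's entrypoint for debugging
```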
Check that all required configuration is present:
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].env}' | jq
Verify secrets and configmaps exist:
kubectl get secrets
kubectl get configmaps
Missing or empty environment variables are a common cause of startup failures.
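Missing variables are easiest to diagnose when the entrypoint checks for them explicitly and fails with a clear message instead of a cryptic stack trace deeper in the app. A minimal guard sketch (require and DB_HOST are illustrative names, not part of any standard image):

```shell
#!/bin/sh
# Entrypoint-style guard: exit with a clear log line instead of a
# cryptic crash when a required variable is unset or empty.
require() {
  for name in "$@"; do
    eval "val=\${$name:-}"   # indirect lookup of the variable named in $name
    if [ -z "$val" ]; then
      echo "fatal: required env var $name is not set" >&2
      exit 1
    fi
  done
}

DB_HOST="db.internal"   # normally injected via the pod spec
require DB_HOST
echo "all required env vars present"
```

With a guard like this, kubectl logs --previous shows exactly which variable was missing, pointing straight at the absent secret or configmap.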
On GKE, use the interactive debugging playbook in Cloud Console for guided troubleshooting. For EKS, check IAM role attachments if your application connects to AWS services. In AKS, verify your container image architecture matches the cluster node architecture (AMD64 vs ARM64).
For Java applications, remember that JVM heap settings don't account for all memory usage—native threads and off-heap memory can push containers over their limits. Set container memory limits higher than JVM max heap.
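One common mitigation is to size the heap relative to the container limit rather than hardcoding -Xmx, using the JVM's container-aware flags via the standard JAVA_TOOL_OPTIONS hook. A sketch (the percentage and limit are illustrative):

```yaml
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-XX:MaxRAMPercentage=75.0"  # leave ~25% headroom for off-heap and native memory
resources:
  limits:
    memory: "1Gi"
```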
In CI/CD pipelines, CrashLoopBackOff often indicates missing secrets that exist in production but not in the deployment environment. Use kubectl diff to compare configurations across environments.