Exit code 1 is a generic application error indicating the container process failed. Check container logs for the specific error message and fix the underlying application or configuration issue.
Exit code 1 is a general-purpose error code indicating that a container's main process terminated with a failure. Unlike specific codes like 137 (OOMKilled) or 127 (command not found), exit code 1 is a catch-all that applications use when something goes wrong. This makes exit code 1 both common and challenging to debug—the exit code itself doesn't tell you what failed. The actual cause must be found in container logs, which might show anything from configuration errors to unhandled exceptions to missing dependencies.
The logs reveal the actual error:

```bash
kubectl logs <pod-name>
```

For crashed containers, view logs from the previous instance:

```bash
kubectl logs <pod-name> --previous
```

For multi-container pods:

```bash
kubectl logs <pod-name> -c <container-name> --previous
```

Look for exception stack traces, error messages, or the last operation logged before termination.
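Kubernetes also records the exit code and termination reason in the pod status, which confirms you are actually dealing with exit code 1:

```bash
# Show the last terminated state: exit code, reason, and timestamps
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
```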
Missing environment variables are a common cause:

```bash
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].env}' | jq
```

Check that referenced secrets and configmaps exist:

```bash
kubectl get secrets
kubectl get configmaps
```

Verify secret values are not empty:

```bash
kubectl get secret <secret-name> -o jsonpath='{.data}' | jq
```
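To inspect an individual value, decode it with base64 (the key name is a placeholder):

```bash
# Decode a single key from the secret
kubectl get secret <secret-name> -o jsonpath='{.data.<key>}' | base64 -d
```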
Reproduce the issue outside Kubernetes:

```bash
docker run -it <image>:<tag>
```

If it fails locally, the issue is in the image itself. If it works locally, compare environment differences (see the sketch after this list):

- Environment variables
- Volume mounts
- Network access
- Resource limits
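A minimal sketch of closing that gap: rerun the image with the same environment variables and approximate resource limits the pod has (all values below are illustrative):

```bash
# Approximate the pod's runtime environment locally
docker run --rm -it \
  --memory=256m \
  -e LOG_LEVEL=debug \
  -e DATABASE_URL="postgres://user:pass@db-host:5432/app" \
  <image>:<tag>
```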
Verify the container command is correct:

```bash
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].command}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].args}'
```

Common issues:
- Shell scripts missing shebang line (#!/bin/bash)
- Scripts not marked executable
- Wrong path to entrypoint file
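One way to rule out the executable-bit and entrypoint-path problems without redeploying is to list the file inside the image (the path here is illustrative):

```bash
# Override the entrypoint so the container just lists the script
docker run --rm --entrypoint ls <image>:<tag> -l /app/entrypoint.sh
```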
For debugging, override the command temporarily:
command: ["sleep", "infinity"]Then exec in and run manually to see real-time errors.
If the app connects to databases or APIs on startup:

```bash
# Test from a debug pod
kubectl run debug --rm -it --image=busybox -- /bin/sh

# Inside the pod
nslookup <service-name>
nc -zv <host> <port>
```

Check if services are available and network policies allow access. For databases, verify credentials and connection strings.
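If a dependency still is not reachable, check whether a NetworkPolicy is restricting traffic, and probe the dependency's health endpoint from the debug pod (the /healthz path is illustrative):

```bash
# List network policies that might be blocking traffic
kubectl get networkpolicies --all-namespaces

# From inside the debug pod: fetch a dependency's health endpoint
wget -qO- http://<service-name>:<port>/healthz
```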
Increase log verbosity to capture more details:

```yaml
env:
- name: LOG_LEVEL
  value: "debug"
- name: NODE_DEBUG
  value: "*"  # For Node.js
```
For applications that exit quickly, add startup logging or a delay:

```bash
# In entrypoint script
echo "Starting application..."
echo "Environment: $(env)"
exec ./myapp
```

Exit code 1 can also result from proper error handling: applications that detect invalid configuration and exit cleanly with code 1 are easier to debug than those that crash unpredictably.
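A minimal sketch of that fail-fast pattern in an entrypoint script (the variable name is illustrative):

```bash
#!/bin/sh
# Validate required configuration up front and exit with a clear message
if [ -z "$DATABASE_URL" ]; then
  echo "FATAL: DATABASE_URL is not set" >&2
  exit 1
fi

exec ./myapp
```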
For init containers failing with exit code 1, check that they have all required dependencies. Init containers run before the main container and must complete successfully.
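Init container logs are fetched the same way as application container logs, by container name:

```bash
# View logs from a failed init container
kubectl logs <pod-name> -c <init-container-name>

# Inspect init container statuses, including exit codes
kubectl get pod <pod-name> -o jsonpath='{.status.initContainerStatuses}' | jq
```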
In CI/CD pipelines, exit code 1 failures often indicate environment differences between staging and production. Use kubectl diff to compare manifests and ensure secrets/configmaps match.
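For example, to preview drift against the live cluster and compare a configmap across clusters (context names are illustrative):

```bash
# Show what applying the manifest would change
kubectl diff -f deployment.yaml

# Compare a configmap between two contexts
diff <(kubectl --context staging get configmap <name> -o yaml) \
     <(kubectl --context production get configmap <name> -o yaml)
```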
When debugging intermittent exit code 1 failures, check if the issue correlates with specific nodes, time of day (load patterns), or after deployments. Race conditions during startup—like connecting to a database before it's ready—can cause sporadic failures that resolve with retry logic or init containers.
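To check for node correlation, sort pods by restart count and note which nodes the failing ones land on:

```bash
# Pods with the most restarts appear last; -o wide shows the node
kubectl get pods -o wide --sort-by='.status.containerStatuses[0].restartCount'
```

For the database-not-ready race, a minimal wait-loop init container sketch (host and port are placeholders):

```yaml
initContainers:
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nc -z <db-host> 5432; do sleep 2; done"]
```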