Exit code 137 indicates your Docker container received SIGKILL (signal 9). This typically happens due to out-of-memory (OOM) conditions, manual termination via docker stop/kill, or the container exceeding its shutdown grace period. The fix depends on identifying whether the kill was memory-related or externally triggered.
Exit code 137 in Docker means your container process was terminated by SIGKILL, the most forceful way to stop a process in Linux. The exit code is calculated as 128 + signal number, and since SIGKILL is signal 9, the result is 137. There are several scenarios that lead to this exit code:

1. **Out of Memory (OOM)**: The most common cause. When your container exceeds its memory limit or the host runs out of memory, the Linux kernel's OOM killer terminates the process with SIGKILL.
2. **Manual Termination**: Someone (or an automated system) explicitly killed the container using `docker stop`, `docker kill`, or the Linux `kill -9` command.
3. **Graceful Shutdown Timeout**: When `docker stop` is issued, Docker sends SIGTERM first. If the container doesn't exit within the grace period (default 10 seconds), Docker escalates to SIGKILL.
4. **Orchestrator Intervention**: Kubernetes, Docker Swarm, or CI/CD pipelines may terminate containers that fail health checks or exceed resource limits, or replace them during rolling deployments.

Understanding which scenario applies to your situation is crucial for implementing the correct fix.
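To see the mechanics firsthand, you can reproduce exit code 137 with a throwaway container; a minimal sketch, assuming a small image such as `alpine` is available locally or can be pulled:

```bash
# Start a disposable container that just sleeps
docker run -d --name exit137-demo alpine sleep 300

# Send SIGKILL, then read back the recorded exit code (prints 137)
docker kill --signal=KILL exit137-demo
docker inspect --format '{{.State.ExitCode}}' exit137-demo

# Clean up
docker rm exit137-demo
```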
First, determine if memory exhaustion caused the termination:
```bash
docker inspect --format='{{.State.OOMKilled}}' <container_name_or_id>
```
If this returns `true`, the container was killed by the OOM killer. You can also check the full state:
```bash
docker inspect --format='{{json .State}}' <container_name_or_id> | jq
```
Look for:
- "OOMKilled": true - Confirms OOM as the cause
- "ExitCode": 137 - Confirms SIGKILL termination
If OOMKilled is false, the container was killed externally (docker stop/kill, orchestrator, etc.).
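If several containers have exited and you want to find the OOM-killed ones in one pass, a short pipeline over `docker ps` can help; a rough sketch (uses GNU `xargs -r`; names and output will vary):

```bash
# Report the OOMKilled flag and exit code for every exited container
docker ps -a --filter status=exited --format '{{.ID}}' \
  | xargs -r docker inspect \
      --format '{{.Name}} OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
```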
Check system logs for OOM events or other termination signals:
```bash
# Check kernel messages for OOM killer activity
dmesg | grep -i "killed process"

# Check system logs
journalctl -u docker --since "1 hour ago" | grep -i kill

# View Docker daemon events
docker events --since 30m --filter event=die
```
Check your container logs:
```bash
docker logs <container_name_or_id> --tail 100
```
Look for patterns:
- Logs end abruptly without shutdown messages = SIGKILL (no chance to log)
- "Received SIGTERM, shutting down..." followed by timeout = Graceful shutdown timeout
- Memory-related warnings from your application = Approaching OOM
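For the last pattern above, grepping the log tail for memory-related keywords is often faster than scrolling; a simple sketch (the keyword list is only a starting point):

```bash
# Search recent container output for memory-related messages
docker logs <container_name_or_id> --tail 500 2>&1 \
  | grep -iE 'out of memory|oom|heap|allocat'
```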
If OOM is suspected, analyze your container's memory consumption:
```bash
# Real-time memory stats
docker stats <container_name_or_id>

# Check memory limit set on container
docker inspect --format='{{.HostConfig.Memory}}' <container_name_or_id>
```
A value of 0 means no limit is set. To see the limit in human-readable form:
```bash
docker inspect <container_name_or_id> | grep -A5 '"Memory"'
```
Watch the MEM USAGE / LIMIT column in `docker stats`. If usage approaches the limit before crashes, OOM is likely the cause.
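If the crash takes a while to reproduce, it can help to sample memory usage to a file and review it after the container dies; a simple polling sketch (interval and file name are arbitrary):

```bash
# Append a timestamped memory sample every 10 seconds until interrupted
while true; do
  echo "$(date +%T) $(docker stats --no-stream --format '{{.MemUsage}}' <container_name_or_id>)" \
    >> mem-usage.log
  sleep 10
done
```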
If OOM is confirmed, increase the container's memory allocation:
```bash
# Run with higher memory limit
docker run --memory=2g --memory-swap=2g <image>
```
For Docker Compose:
```yaml
services:
  myapp:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G
```
For Docker Desktop (Windows/Mac): Go to Settings > Resources and increase the Memory allocation for the Docker VM.
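After recreating the container, it is worth confirming the new limit actually took effect; a quick check, assuming the Compose service name `myapp` from the snippet above (Compose V2 syntax):

```bash
# Should print the configured limit in bytes (2G = 2147483648)
docker inspect --format '{{.HostConfig.Memory}}' $(docker compose ps -q myapp)
```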
Important: Also tune your application's runtime so its own memory ceiling sits below the container limit (see the example after this list):
- Java: `-XX:MaxRAMPercentage=75.0`
- Node.js: `NODE_OPTIONS="--max-old-space-size=1536"`
- Go: `GOMEMLIMIT=1500MiB`
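These runtime flags are typically passed as environment variables when the container starts, so the application's heap ceiling stays below the cgroup limit; a sketch for the Node.js case (image name and values are placeholders):

```bash
# Give the container 2 GiB and cap the Node.js heap well below it
docker run --memory=2g \
  -e NODE_OPTIONS="--max-old-space-size=1536" \
  myimage:latest
```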
If your container is being killed due to shutdown timeout, implement proper signal handling:
Node.js example:
```javascript
process.on('SIGTERM', () => {
  console.log('Received SIGTERM, shutting down gracefully...');
  server.close(() => {
    console.log('Server closed');
    process.exit(0);
  });
});
```
Python example:
```python
import signal
import sys

def handle_sigterm(signum, frame):
    print("Received SIGTERM, shutting down...")
    # Cleanup code here
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```
Shell script fix using exec:
```bash
# Dockerfile still launches the wrapper script:
#   CMD ["./start.sh"]
#
# In start.sh, instead of:
#   python app.py
# use exec so the app replaces the shell process and receives signals directly:
exec python app.py
```
If your application needs more time for graceful shutdown:
```bash
# Stop with longer timeout (30 seconds)
docker stop --time=30 <container_name_or_id>
```
For Docker Compose:
```yaml
services:
  myapp:
    image: myimage:latest
    stop_grace_period: 30s
```
Or use the command line:
```bash
docker-compose stop --timeout 30
```
Note: Very long timeouts can slow down deployments and restarts. Aim to make your application shut down faster rather than extending timeouts indefinitely.
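If the longer grace period should apply every time the container is stopped, it can also be set when the container is created rather than on each `docker stop`; a brief sketch:

```bash
# Default stop grace period of 30 seconds for this container
docker run --stop-timeout 30 <image>
```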
If running in Kubernetes or CI/CD, check those systems:
Kubernetes: Check if the pod was evicted or terminated:
```bash
kubectl describe pod <pod_name>
kubectl get events --field-selector involvedObject.name=<pod_name>
```
Look for:
- OOMKilled reason in container status
- Eviction events due to node pressure
- Failed liveness/readiness probes
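To pull the termination reason directly instead of scanning the full describe output, a jsonpath query against the container status can help; a sketch (adjust the index for multi-container pods):

```bash
# Prints e.g. OOMKilled or Error for the previous termination
kubectl get pod <pod_name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```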
CI/CD pipelines: Check for:
- Job timeout settings
- Pipeline cancellation
- Cleanup policies
Docker Swarm:
```bash
docker service ps <service_name> --no-trunc
```
Understanding exit code 137 calculation: In Unix/Linux, when a process is terminated by a signal, its exit code equals 128 plus the signal number. SIGKILL is signal 9, so 128 + 9 = 137. Other common signal-related exit codes:
- 137 = SIGKILL (128 + 9)
- 143 = SIGTERM (128 + 15)
- 139 = SIGSEGV (128 + 11, segmentation fault)
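The same 128 + signal arithmetic can be observed in any shell, which makes for a quick sanity check (exact behavior may vary slightly by shell):

```bash
# The subshell is killed with signal 9; the parent shell reports 128 + 9
sh -c 'kill -9 $$'
echo $?    # 137

# SIGTERM (signal 15) gives 128 + 15
sh -c 'kill -TERM $$'
echo $?    # 143
```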
OOMKilled false but still exit 137: If docker inspect shows OOMKilled: false with exit code 137, the kill came from outside Docker's memory limits:
- The Linux kernel OOM killer may have killed it (check dmesg)
- System-wide memory pressure triggered the kernel OOM (different from Docker's cgroup OOM)
- An external process sent SIGKILL
Differentiating OOM sources: Docker only sets OOMKilled: true when the container hits its cgroup memory limit. If the entire host runs out of memory, the kernel may kill the container without Docker knowing - check dmesg for "Out of memory" messages.
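One way to attribute a host-level kill to a specific container is to match the container's host PID against the kernel log while reproducing the issue; a sketch (log wording varies by kernel version, and `.State.Pid` is 0 once the container has stopped):

```bash
# Host PID of the container's main process (only valid while it is running)
docker inspect --format '{{.State.Pid}}' <container_name_or_id>

# Kernel OOM killer entries name the PID and process they killed
dmesg -T | grep -iE 'out of memory|oom-killer|killed process'
```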
Using `--oom-kill-disable` (use with caution):
```bash
docker run --oom-kill-disable --memory=1g <image>
```
This prevents OOM killing but the container will hang when memory is exhausted. Only use with strict memory limits set.
PID 1 problem: In containers, your application runs as PID 1, which has special signal-handling rules in Linux. Some applications don't handle signals properly as PID 1. Solutions (a quick demo follows this list):
- Use the `--init` flag: `docker run --init <image>`
- Use `tini` or `dumb-init` as the entrypoint
- Ensure your app explicitly handles SIGTERM
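The demo mentioned above: without `--init`, a sleeping PID 1 never sees SIGTERM, so `docker stop` waits out the grace period and escalates to SIGKILL (exit 137); with `--init`, tini forwards the signal and the container exits quickly (exit 143). A rough sketch, assuming an `alpine` image (behavior can vary by image and shell):

```bash
# Without --init: sleep is PID 1, ignores SIGTERM, killed after ~10s -> 137
docker run -d --name no-init alpine sleep 300
time docker stop no-init
docker inspect --format '{{.State.ExitCode}}' no-init

# With --init: tini forwards SIGTERM, sleep exits almost immediately -> 143
docker run -d --init --name with-init alpine sleep 300
time docker stop with-init
docker inspect --format '{{.State.ExitCode}}' with-init

docker rm no-init with-init
```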
image operating system "linux" cannot be used on this platform
How to fix 'image operating system linux cannot be used on this platform' in Docker
manifest unknown: manifest unknown
How to fix 'manifest unknown' in Docker
cannot open '/etc/passwd': Permission denied
How to fix 'cannot open: Permission denied' in Docker
Error response from daemon: failed to create the ipvlan port
How to fix 'failed to create the ipvlan port' in Docker
toomanyrequests: Rate exceeded for anonymous users
How to fix 'Rate exceeded for anonymous users' in Docker Hub