Exit code 137 indicates your Docker container was forcibly terminated by the system's out-of-memory (OOM) killer. This occurs when a container exceeds its allocated memory limit or the host system runs out of available memory.
Exit code 137 means your container received a SIGKILL signal (signal 9) and was terminated immediately. In Unix/Linux systems, when a process exits due to a signal, the exit code equals 128 plus the signal number, so 128 + 9 = 137. This exit code most commonly appears when the Linux kernel's OOM (Out of Memory) killer terminates your container because it consumed too much memory. The kernel prioritizes system stability over individual processes, so when memory runs critically low, it will kill memory-hungry processes to prevent system-wide crashes. The Docker daemon lowers its own OOM priority, so individual containers are more likely to be killed than the daemon itself. When you see "OOMKilled: true" in docker inspect output alongside exit code 137, you know the memory limit was the culprit.
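If you want to see the 128 + signal arithmetic on its own, you can send SIGKILL to a running container from the host and read the recorded state back; the container name sigkill-demo and the busybox image below are just placeholders:
# Start a throwaway container, then kill it from the host with SIGKILL (signal 9)
docker run -d --name sigkill-demo busybox sleep 300
docker kill --signal=SIGKILL sigkill-demo
# Exit code is 128 + 9 = 137, but OOMKilled stays false because no OOM occurred
docker inspect -f '{{.State.ExitCode}}' sigkill-demo
docker inspect -f '{{.State.OOMKilled}}' sigkill-demo
docker rm sigkill-demo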
First, confirm that memory was actually the issue by inspecting the container:
docker inspect -f '{{.State.OOMKilled}}' <container_id>
If this returns true, the OOM killer was responsible. Also check the exit code:
docker inspect -f '{{.State.ExitCode}}' <container_id>
You can also check system logs for OOM events:
# On systemd systems
journalctl -k | grep -i "killed process"
# Or check dmesg
dmesg | grep -i "out of memory"
Before making changes, understand your container's actual memory needs:
# Real-time memory stats for all containers
docker stats
# For a specific container
docker stats <container_name>
Watch the MEM USAGE and LIMIT columns. If usage consistently approaches the limit, you need to increase the allocation.
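For scripting or logging, docker stats can also take a one-shot snapshot with custom columns; the format placeholders below are standard Go-template fields, and the 60-second interval is just an example:
# One-shot snapshot instead of the live view
docker stats --no-stream
# Custom columns, handy for piping into a log file
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
# Crude periodic sampling
while true; do
  docker stats --no-stream --format "{{.Name}} {{.MemUsage}}" >> memory.log
  sleep 60
done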
Set an appropriate memory limit using the --memory flag:
# Allocate 1GB of memory
docker run --memory=1g --name myapp myapp:latest
# Combine with memory-swap to prevent swap usage
docker run --memory=1g --memory-swap=1g --name myapp myapp:latest
For Docker Compose, add resource limits to your service:
services:
  myapp:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
On macOS and Windows, Docker runs in a VM with limited memory allocation:
Docker Desktop:
1. Open Docker Desktop
2. Go to Settings/Preferences
3. Select Resources → Advanced
4. Increase the Memory slider (e.g., from 2GB to 4GB or more)
5. Click "Apply & Restart"
This is often the quickest fix if you're developing locally and see exit code 137.
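You can verify how much memory the Docker Desktop VM actually has after the change by querying the daemon; the --format value is reported in bytes:
# Total memory available to the Docker VM, in bytes
docker info --format '{{.MemTotal}}'
# Human-readable line in the full output
docker info | grep -i "total memory"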
If increasing limits only temporarily fixes the issue, your application likely has a memory leak:
For Node.js applications:
# Generate heap snapshots
node --inspect app.js
Use Chrome DevTools to analyze memory over time.
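If the application runs inside a container, the inspector port has to be published so DevTools on the host can attach. A rough sketch, where the image name, entrypoint, and port mapping are assumptions for illustration:
# Publish the Node.js inspector port and bind it to all interfaces
docker run -p 9229:9229 --memory=1g myapp:latest node --inspect=0.0.0.0:9229 app.js
Then open chrome://inspect in Chrome and add localhost:9229 as a target.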
For Java applications:
# Enable JVM memory profiling
docker run --memory=1g \
-e JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps" \
myapp:latest
For Python applications:
# Use memory_profiler
pip install memory_profiler
python -m memory_profiler app.py
Common leak sources: unclosed database connections, circular references, unbounded caches, and accumulating event listeners.
Reduce memory footprint through code optimization:
- Process data in streams/chunks rather than loading entire datasets
- Implement proper connection pooling with max limits
- Use pagination for large database queries
- Clear unused objects and close file handles
- Implement LRU caches with size limits
- Avoid keeping large objects in memory unnecessarily
Example for Node.js streams:
const fs = require('fs');

// Bad: loads the entire file into memory at once
const data = fs.readFileSync('large-file.txt');

// Good: processes the file in chunks
// (processStream stands in for any Transform stream you define)
fs.createReadStream('large-file.txt')
  .pipe(processStream)
  .pipe(fs.createWriteStream('output.txt'));
Understanding "Invisible" OOM Kills: In Kubernetes, if the OOM killer selects a process other than PID 1 inside your container, the container won't be marked as OOMKilled and won't restart. Only when the init process (PID 1) is killed does Kubernetes detect the OOM event. Monitor kernel logs (dmesg) to catch these invisible kills.
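To catch these kills on the node, a minimal approach is to watch the kernel ring buffer; the grep patterns below match the usual oom-killer log lines, though the exact wording varies by kernel version:
# Human-readable timestamps; shows which PID and cgroup were targeted
dmesg -T | grep -iE "oom-killer|killed process"
# Follow kernel messages live on the node
journalctl -k -f | grep -i oom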
The --oom-kill-disable Flag: You can disable the OOM killer for a container with --oom-kill-disable, but this is dangerous. Only use it in combination with a --memory limit; otherwise, the container can freeze the host system. Note that this flag is not supported on cgroups v2.
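If you do use it, the only reasonably safe pattern is to pair the flag with a hard memory limit, roughly like this (cgroups v1 hosts only):
# Never combine --oom-kill-disable with an unlimited container:
# it could otherwise exhaust host memory and hang the machine
docker run --memory=1g --oom-kill-disable --name myapp myapp:latest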
Memory Swap Behavior: The --memory-swap flag controls how much swap space a container can use. Setting it equal to --memory disables swap entirely for that container. While swap can prevent OOM kills, it severely degrades performance. It's better to allocate sufficient RAM than rely on swap.
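For reference, the two configurations look roughly like this; the sizes are arbitrary examples:
# Swap disabled: the container gets 1 GB of RAM and nothing more
docker run --memory=1g --memory-swap=1g myapp:latest
# 512 MB of RAM plus up to 512 MB of swap (--memory-swap is RAM + swap combined)
docker run --memory=512m --memory-swap=1g myapp:latest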
Exit Code 137 Without OOM: Exit code 137 can also occur if you run docker stop and your application doesn't handle SIGTERM gracefully within the 10-second timeout. Docker sends SIGTERM, waits, then sends SIGKILL (resulting in exit 137). To diagnose this, check if OOMKilled is false in docker inspect.
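A quick way to tell the two cases apart, and to give a slow-stopping application more time, is sketched below; the 30-second timeout is an arbitrary example:
# OOMKilled=false with exit code 137 usually points to an unhandled SIGTERM
docker inspect -f 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' <container_id>
# Give the app longer to shut down before Docker escalates to SIGKILL
docker stop --time 30 <container_id>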
Host-Level Memory Pressure: Even if your container is under its limit, the host's overall memory pressure can trigger the OOM killer. Use docker system df and free -h to check host resources. Consider distributing containers across multiple hosts or upgrading host memory.
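A minimal host-side check might look like this:
# Disk usage by Docker images, containers, and volumes
docker system df
# Overall host memory, including what's still free
free -h
# Memory limit configured for each running container (0 means unlimited)
docker ps -q | xargs docker inspect -f '{{.Name}}: {{.HostConfig.Memory}}'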