This error occurs when a Docker container exceeds its memory limit and the Linux kernel's OOM (Out of Memory) killer terminates it. The fix involves setting appropriate memory limits, optimizing application memory usage, and configuring swap settings.
The "OOM Killed" error in Docker indicates that the Linux kernel's Out-of-Memory (OOM) killer has terminated a process inside your container because it exceeded the available memory. This is a critical protection mechanism that prevents a single container from consuming all system memory and crashing the entire host. When a container hits its memory limit (set via --memory flag) or the host runs low on memory, the kernel identifies the most "expendable" process and kills it. Docker containers are prime targets because Docker adjusts their OOM priority to protect the Docker daemon and other system processes. A container killed by OOM will exit with code 137 (128 + 9, where 9 is SIGKILL). You can confirm an OOM kill by inspecting the container: the OOMKilled field will be set to true.
First, verify that OOM was the cause of the container exit:
docker inspect --format='{{.State.OOMKilled}}' <container_name>

If this returns true, the container was killed by the OOM killer. Also check the exit code:
docker inspect --format='{{.State.ExitCode}}' <container_name>

Exit code 137 indicates the process received SIGKILL (typically from the OOM killer).
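If you are not sure which container died, you can also list everything on the host whose last exit code was 137 (the --format string here is just for readability):

docker ps -a --filter "exited=137" --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"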
Check system logs for OOM events:
dmesg | grep -i "out of memory"
# or
grep -i "oom" /var/log/syslog

Monitor your container's memory usage in real time:
docker stats <container_name>

Check the current memory limit:
docker inspect --format='{{.HostConfig.Memory}}' <container_name>

This returns bytes (0 means no limit). To see all memory-related settings at once:
docker inspect <container_name> | grep -A 5 "Memory"

Run your application under load and observe peak memory usage to understand its actual requirements.
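One low-tech way to capture that peak is to sample docker stats on an interval while you drive traffic at the container. A rough sketch (the container name myapp is a placeholder):

#!/bin/bash
# Sample memory usage every 5 seconds; redirect to a file and scan it for the high-water mark
while true; do
  echo "$(date '+%H:%M:%S')  $(docker stats --no-stream --format '{{.MemUsage}} ({{.MemPerc}})' myapp)"
  sleep 5
done

Run it during a representative load test, then size the memory limit with some headroom above the observed peak.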
Set an appropriate memory limit when running the container:
docker run --memory=1g --memory-swap=1g <image>

Common memory flags:
- --memory or -m: Hard limit on memory usage
- --memory-swap: Total memory + swap limit (set equal to --memory to disable swap)
- --memory-reservation: Soft limit for memory reservation
Example with a 2GB limit:
docker run -d \
  --name myapp \
  --memory=2g \
  --memory-swap=2g \
  --memory-reservation=1g \
  myimage:latest

For Docker Compose, add to your service:
services:
  myapp:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G

Many runtimes need explicit configuration to respect container limits:
Node.js: Set max heap size to ~75% of container memory:
docker run --memory=1g -e NODE_OPTIONS="--max-old-space-size=768" node-app

Java (JDK 8u191+): Enable container awareness:
docker run --memory=1g \
  -e JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0" \
  java-app

Python: For memory-intensive operations, consider using generators and chunked processing. Set process limits:
import resource

# Cap the process address space at 1 GiB (soft limit, hard limit)
resource.setrlimit(resource.RLIMIT_AS, (1024*1024*1024, 1024*1024*1024))

Go: The Go runtime does not size its heap from the cgroup limit automatically; since Go 1.19 you can set a soft memory limit with GOMEMLIMIT:
docker run --memory=1g -e GOMEMLIMIT=750MiB go-app

Allow some swap to handle temporary memory spikes:
docker run --memory=1g --memory-swap=2g <image>

This gives 1GB of RAM plus 1GB of swap. The container can burst to 2GB total, with the excess going to swap.
Note: Swap is slower than RAM and may cause performance issues. Use it as a buffer, not a replacement for adequate memory.
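To confirm what Docker actually applied, you can read both limits back from the container configuration (values are reported in bytes, and MemorySwap covers RAM plus swap combined):

docker inspect --format='Memory={{.HostConfig.Memory}} MemorySwap={{.HostConfig.MemorySwap}}' <container_name>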
For Docker Desktop, enable swap in Settings > Resources.
To check current swap configuration:
docker info | grep -i swap

If the container keeps getting OOM killed despite adequate limits, investigate memory leaks:
Profile your application: Use language-specific tools:
- Node.js: --inspect flag with Chrome DevTools (see the sketch after this list)
- Java: JVisualVM, Eclipse MAT, or async-profiler
- Python: memory_profiler, tracemalloc
- Go: pprof
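For a containerized Node.js service, for instance, profiling usually means exposing the inspector port so Chrome DevTools can attach and take heap snapshots. A sketch in which the image name, entrypoint, and port mapping are assumptions:

# Run the app with the V8 inspector listening on all interfaces inside the container
docker run -d --name myapp-debug \
  --memory=1g \
  -p 9229:9229 \
  myimage:latest \
  node --inspect=0.0.0.0:9229 server.js

Then open chrome://inspect in Chrome, connect to localhost:9229, and compare heap snapshots taken a few minutes apart.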
Check for common leak sources:
- Unbounded caches or queues
- Event listeners not being removed
- Database connection pools growing indefinitely
- Large objects held in memory longer than needed
Add application-level memory limits:
// Node.js example: limit cache size
const LRU = require('lru-cache');
const cache = new LRU({ max: 500 }); // max 500 items

Monitor memory over time to identify gradual growth patterns.
Set up proactive monitoring to catch issues before OOM:
Using docker stats in a script:
#!/bin/bash
while true; do
  docker stats --no-stream --format "{{.Name}}: {{.MemPerc}}" | \
  while read line; do
    pct=$(echo $line | grep -oP '[0-9.]+(?=%)')
    if (( $(echo "$pct > 80" | bc -l) )); then
      echo "WARNING: $line"
    fi
  done
  sleep 10
done

With Docker Compose healthchecks:
healthcheck:
  test: ["CMD", "sh", "-c", "test $(cat /sys/fs/cgroup/memory/memory.usage_in_bytes) -lt 900000000"]
  interval: 30s
  timeout: 10s
  retries: 3

Consider using Prometheus + cAdvisor for production monitoring.
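If you go that route, cAdvisor can itself be run as a container. The invocation below is one common form, though the mounts and image tag may need adjusting for your host:

docker run -d --name=cadvisor \
  -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:ro \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor:latest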
Understanding cgroups and OOM: Docker uses Linux cgroups to enforce memory limits. When memory.limit_in_bytes is exceeded, the kernel's OOM killer is triggered. You can check cgroup memory stats:
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.stat

OOM Score Adjustment: Docker gives containers a higher OOM score than the daemon, making them more likely to be killed. You can adjust this:
docker run --oom-score-adj=-500 <image>  # Less likely to be killed

Valid range: -1000 (never kill) to 1000 (kill first). Use with caution.
Disabling OOM Killer (not recommended): You can disable OOM killing for a container:
docker run --oom-kill-disable --memory=1g <image>

Warning: Only use with a memory limit set. Without a limit, the container can consume all host memory, potentially crashing the entire system. The container will hang rather than be killed.
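Note that on cgroup v2 hosts this flag may be ignored. You can check whether it is actually active on a container before relying on it:

docker inspect --format='{{.HostConfig.OomKillDisable}}' <container_name>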
Cgroup v2 and memory.oom.group: On systems with cgroup v2, you can configure OOM behavior to kill all processes in a container (instead of just one):
echo 1 > /sys/fs/cgroup/docker/<container_id>/memory.oom.group

Kubernetes considerations: In Kubernetes, OOM killed containers show as OOMKilled in the pod status. Set resource requests and limits appropriately:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"

Using oomd for proactive management: Consider enabling systemd-oomd or Facebook's oomd to proactively kill processes before the kernel OOM killer is triggered, allowing for more graceful handling.