The 'Exited (139)' error occurs when a Docker container receives SIGSEGV (signal 11), a segmentation fault indicating the application attempted to access invalid memory. This typically stems from application bugs, library incompatibilities, or architecture mismatches and requires debugging at the binary level to resolve.
When Docker shows 'Exited (139)' or 'Container terminated with SIGSEGV', it means the container's main process received signal 11 (SIGSEGV, segmentation violation). Exit code 139 is calculated as 128 + 11: 128 indicates termination by a signal, and 11 is the SIGSEGV signal number.
A segmentation fault occurs when a program attempts to access memory it doesn't have permission to access. Common triggers are dereferencing null pointers, accessing freed memory (use-after-free), writing beyond array bounds (buffer overflow), and executing corrupted stack memory. The kernel immediately terminates the offending process to prevent further memory corruption or potential security exploits. Unlike out-of-memory kills (exit code 137/SIGKILL), which come from resource exhaustion, SIGSEGV indicates a programming error or binary incompatibility. The crash is usually deterministic and reproduces under the same conditions, though memory-corruption bugs can also surface intermittently.
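The 128 + signal arithmetic can be checked directly in the shell; this is a generic sketch for any bash-compatible shell, not a Docker command:

```shell
# Decode an exit code above 128 into the signal that terminated the process.
# In bash, 'kill -l <n>' prints the name of signal number <n>.
exit_code=139
if [ "$exit_code" -gt 128 ]; then
  sig=$((exit_code - 128))
  echo "terminated by signal $sig ($(kill -l "$sig"))"
fi
# prints: terminated by signal 11 (SEGV)
```

The same arithmetic explains other signal exits: 137 decodes to signal 9 (KILL) and 143 to signal 15 (TERM).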
First, confirm the exit code and check kernel logs for crash details:
# Confirm exit code 139
docker inspect --format '{{.State.ExitCode}}' <container-id>
# Output: 139
# Check container state details
docker inspect --format '{{json .State}}' <container-id> | jq
# View kernel segfault messages on the host
dmesg | tail -50 | grep -i segfault
# Or with journalctl
journalctl -k --since "10 minutes ago" | grep -i segfault
Example dmesg output:
[12345.678] myapp[1234]: segfault at 0 ip 00007f8a12345678 sp 00007ffc87654321 error 4 in libc.so.6
Common error values: 4 = user-mode read of an unmapped page, 6 = user-mode write to an unmapped page. The 'at 0' indicates a null pointer dereference.
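The error value is a bitfield (bit 0 = protection fault vs. missing page, bit 1 = write vs. read, bit 2 = user vs. kernel mode), so it can be decoded mechanically; a small shell sketch:

```shell
# Decode the 'error' value from a dmesg segfault line.
err=4
[ $((err & 1)) -ne 0 ] && echo "protection fault (page was present)" || echo "page not present"
[ $((err & 2)) -ne 0 ] && echo "write access" || echo "read access"
[ $((err & 4)) -ne 0 ] && echo "user mode" || echo "kernel mode"
# For err=4 this prints: page not present / read access / user mode,
# i.e. a user-space read of an unmapped page.
```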
Check what the application was doing before crashing:
# View container logs (may be empty for immediate crashes)
docker logs <container-id>
docker logs --tail 100 <container-id>
# Check if the container ever started successfully
docker ps -a --filter 'name=<container-name>'
# Look at container events
docker events --filter 'container=<container-id>' --since 1h
If logs are empty, the crash likely occurs during initialization. If there are logs before the crash, note what operation triggered it (startup, specific request, data processing).
Architecture mismatch is a common cause of SIGSEGV. Verify the binary matches the runtime:
# Check host architecture
uname -m
# Output: x86_64 or aarch64
# Check image architecture
docker image inspect <image> --format '{{.Architecture}}'
# Check binary inside container
docker run --rm -it <image> file /path/to/binary
# Expected: ELF 64-bit LSB executable, x86-64
# Force specific platform when running
docker run --platform linux/amd64 <image>
# Build for specific platform
docker build --platform linux/amd64 -t myapp:amd64 .
docker build --platform linux/arm64 -t myapp:arm64 .
On Apple Silicon Macs, x86_64 containers run under emulation (Rosetta 2, or QEMU in older Docker Desktop versions), which can cause segfaults in some applications.
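Note that uname and Docker name the same architectures differently (x86_64 vs amd64, aarch64 vs arm64), which makes scripted comparison error-prone. A small mapping helper; uname_to_platform is a hypothetical function, not a Docker command:

```shell
# Map 'uname -m' machine names to Docker platform strings.
uname_to_platform() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    aarch64|arm64) echo "linux/arm64" ;;
    armv7l)        echo "linux/arm/v7" ;;
    *)             echo "unknown" ;;
  esac
}

echo "host platform: $(uname_to_platform "$(uname -m)")"
# Compare against: docker image inspect <image> --format '{{.Os}}/{{.Architecture}}'
```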
Segfaults often occur when binaries are linked against different library versions:
# Enter container to inspect
docker run --rm -it <image> /bin/sh
# Check shared library dependencies
ldd /path/to/binary
# Look for missing libraries
ldd /path/to/binary | grep 'not found'
# Check glibc version
ldd --version
Alpine vs Debian issue: Alpine uses musl libc, not glibc. Binaries compiled on glibc systems may segfault:
# Problematic: Alpine image (musl libc)
FROM python:3.11-alpine
# Better: Debian-based image (glibc)
FROM python:3.11-slim
# Or add glibc compatibility shims to Alpine
RUN apk add --no-cache libc6-compat
Rebuild any native extensions from source on the target platform.
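A quick way to tell which libc a binary expects is its dynamic loader path, shown on the 'Requesting program interpreter' line of readelf -l output. loader_flavor below is a hypothetical classifier for that path:

```shell
# Classify a dynamic loader (program interpreter) path as glibc or musl.
loader_flavor() {
  case "$1" in
    *ld-musl*)  echo musl ;;
    *ld-linux*) echo glibc ;;
    *)          echo unknown ;;
  esac
}

loader_flavor /lib/ld-musl-x86_64.so.1      # musl  (Alpine)
loader_flavor /lib64/ld-linux-x86-64.so.2   # glibc (Debian/Ubuntu)
```

A glibc-linked binary running on a musl-only image (or vice versa) is a prime segfault suspect.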
Determine if the issue is container-specific:
# Run interactively to test
docker run --rm -it <image> /bin/bash
# Inside container, run the application manually
./your-application
# If available, test on host directly
./your-application
Compare results:
- Crashes in both: Bug in application code
- Only crashes in Docker: Container environment issue (missing libs, wrong arch, permissions)
- Only crashes on some hosts: Platform-specific incompatibility
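That decision table can be encoded as a small helper for test scripts; classify_crash is hypothetical, taking the host and container results ('crash' or 'ok'):

```shell
# Encode the comparison table: where does the crash reproduce?
classify_crash() {
  host=$1; container=$2
  if [ "$host" = crash ] && [ "$container" = crash ]; then
    echo "bug in application code"
  elif [ "$container" = crash ]; then
    echo "container environment issue (missing libs, wrong arch, permissions)"
  else
    echo "no local reproduction; suspect platform-specific conditions"
  fi
}

classify_crash ok crash
# prints: container environment issue (missing libs, wrong arch, permissions)
```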
Core dumps capture the crash state for post-mortem analysis:
# On host: enable core dumps
ulimit -c unlimited
echo '/tmp/core.%e.%p.%t' | sudo tee /proc/sys/kernel/core_pattern
# Run container with core dump support
docker run --rm -it \
--ulimit core=-1 \
--cap-add SYS_PTRACE \
-v /tmp:/tmp \
<image> /bin/bash
# Run the application
./your-application
# After crash, core dump appears in /tmp
ls -la /tmp/core.*
Analyze the core dump:
# Install gdb if needed
apt-get update && apt-get install -y gdb
# Load core dump
gdb /path/to/binary /tmp/core.myapp.12345.1234567890
# Get backtrace
(gdb) bt
(gdb) bt full
(gdb) info registers
Use debugging tools to identify the exact crash location:
# Run under GDB
docker run --rm -it --cap-add SYS_PTRACE <image> /bin/bash
apt-get update && apt-get install -y gdb
gdb ./your-application
(gdb) run
# After crash:
(gdb) bt # Backtrace
(gdb) frame 0 # Examine crash frame
(gdb) list # Show source (if debug symbols present)
For C/C++ applications, rebuild with AddressSanitizer:
# Compile with ASan
gcc -fsanitize=address -g -o myapp myapp.c
# Run - ASan provides detailed memory error reports
./myapp
# Example ASan output:
# ==12345==ERROR: AddressSanitizer: SEGV on unknown address
# ==12345==The signal is caused by a READ memory access.
For Python with native extensions:
import faulthandler
faulthandler.enable()
# Now Python will print a traceback on SIGSEGV
Outdated images or dependencies can cause segfaults:
# Update base image to latest stable
FROM node:20-slim # Instead of node:18
FROM python:3.12-slim # Instead of python:3.9
# Update all packages
RUN apt-get update && apt-get upgrade -y && apt-get clean
# For Python - force reinstall native packages
RUN pip install --no-cache-dir --force-reinstall numpy pandas
# For Node.js - rebuild native modules
RUN npm rebuild
# Build from source to match container environment
RUN pip install --no-binary :all: cryptography
Pull the latest image version:
docker pull <image>:latest
docker build --no-cache -t myapp .
Certain older images have compatibility issues with WSL2 or newer Docker Desktop versions:
# Check if running under WSL2
cat /proc/version | grep -i microsoft
# Try vsyscall emulation for older binaries
# Create %userprofile%\.wslconfig:
[wsl2]
kernelCommandLine = vsyscall=emulate
# Restart WSL
wsl --shutdown
# Then restart Docker Desktop
Upgrade old base images that may have kernel compatibility issues:
# Avoid very old images
FROM centos:7 # Instead of centos:6
FROM ubuntu:22.04 # Instead of ubuntu:16.04
If the issue started after a Docker Desktop update, try rolling back to the previous version temporarily.
While rare, security profiles can occasionally cause crashes:
# Test without seccomp (usually causes EPERM, not SIGSEGV)
docker run --security-opt seccomp=unconfined <image>
# Test without AppArmor
docker run --security-opt apparmor=unconfined <image>
# Add all capabilities for testing
docker run --privileged <image>
Security warning: These flags bypass important security controls. Use only for debugging. If one of these resolves the issue, investigate which specific syscall or capability is needed and create a minimal custom profile rather than disabling security entirely.
Persistent, hard-to-reproduce segfaults may indicate hardware issues:
# Check for memory errors in kernel log
dmesg | grep -i 'memory\|hardware\|mce'
# Run memory test (requires reboot)
# Boot into memtest86+ from BIOS/UEFI
# Check disk health
sudo smartctl -a /dev/sda
# Monitor system during container run
watch -n 1 'free -m; vmstat 1 1'
If the same container runs on other hosts without issues, focus investigation on:
- Host RAM (run memtest86+)
- CPU overheating or throttling
- Kernel version differences
- Docker version differences
Understanding segfault details: The dmesg output provides critical information: 'segfault at <address>' shows the memory location accessed. Address '0' indicates null pointer dereference. The 'error' field is a bitfield: bit 0=protection fault (vs no page), bit 1=write (vs read), bit 2=user mode (vs kernel). Common values: 4=user read of unmapped page, 6=user write of unmapped page.
Alpine and musl libc: Alpine Linux uses musl libc instead of glibc for smaller image sizes. However, pre-compiled binaries from most sources (pip wheels, npm native modules, Go cgo binaries) are built against glibc. Running these on Alpine causes segfaults at runtime due to ABI incompatibility. Solutions: use glibc-based images (Debian, Ubuntu), add 'libc6-compat' package to Alpine, or rebuild everything from source on Alpine.
QEMU and cross-architecture builds: When building ARM64 images on x86_64 (or vice versa), QEMU user-space emulation handles the translation. Some applications, especially those with threading, JIT compilation, or memory-mapped I/O, may crash under emulation. Use native build environments (buildx with remote builders or GitHub Actions ARM runners) for problematic builds.
Java and JVM segfaults: JVM segfaults (hs_err_pid*.log files) are usually JVM bugs or native library issues. Check container memory limits - insufficient heap + metaspace can corrupt memory. Ensure -Xmx doesn't exceed container memory limit. Update to latest JDK patch version.
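A common rule of thumb is to cap -Xmx around 75% of the container limit, leaving headroom for metaspace, thread stacks, and other native memory. heap_for_limit is a hypothetical helper illustrating the arithmetic; modern JDKs can also size the heap automatically via -XX:MaxRAMPercentage:

```shell
# Suggest an -Xmx value at 75% of a container memory limit given in bytes.
heap_for_limit() {
  echo "-Xmx$(( $1 * 3 / 4 / 1048576 ))m"
}

heap_for_limit 2147483648   # 2 GiB limit -> -Xmx1536m
```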
Kubernetes debugging: In Kubernetes, exit code 139 triggers CrashLoopBackOff. Use kubectl describe pod for events, kubectl logs --previous for crashed container logs. For live debugging, use ephemeral debug containers: kubectl debug -it <pod> --image=busybox --target=<container>. Consider sidecar containers with debugging tools for persistent issues.
Reproducing intermittent crashes: Memory corruption bugs may only manifest under specific conditions. Use stress testing to reproduce: run multiple container instances, apply load, vary input data. AddressSanitizer (ASan) and Valgrind can detect memory errors that don't immediately crash but corrupt memory leading to later segfaults.
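A simple reproduction harness reruns the workload and tallies exits with code 139; count_segfaults is a hypothetical wrapper, and the docker command in the usage comment is a placeholder:

```shell
# Rerun a command N times and count exits with status 139 (SIGSEGV).
count_segfaults() {
  cmd=$1; runs=$2; n=0
  for i in $(seq "$runs"); do
    sh -c "$cmd" >/dev/null 2>&1
    [ $? -eq 139 ] && n=$((n + 1))
  done
  echo "$n/$runs runs segfaulted"
}

# Usage against a real workload:
# count_segfaults 'docker run --rm <image>' 20
```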