Exit code 139 occurs when a Docker container's main process is terminated by SIGSEGV (signal 11), indicating a segmentation fault caused by invalid memory access. It typically results from application bugs, library incompatibilities, or, in rare cases, hardware issues, and usually requires debugging and code analysis to resolve.
Exit code 139 is calculated as 128 + 11 (signal number), indicating that the container's main process was terminated by SIGSEGV (Signal 11 - Segmentation Violation). A segmentation fault occurs when a program attempts to access memory that it is not allowed to access, such as reading from or writing to an invalid memory location, dereferencing null pointers, or accessing memory that has been freed. Unlike application-level errors (exit code 1) or out-of-memory kills (exit code 137), exit code 139 represents a low-level memory access violation detected by the kernel. The operating system terminates the process immediately to prevent memory corruption or security vulnerabilities. This error often points to bugs in native code, incompatible libraries, or in rare cases, hardware issues like faulty RAM.
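To see where the number comes from, you can deliberately terminate a container's main process with SIGSEGV and inspect the reported exit code (a minimal sketch; the image name is only an example):
# Terminate the container's main process with SIGSEGV and observe the exit code
docker run --name segv-demo debian:bookworm-slim bash -c 'kill -SEGV $$'
echo $?   # 139
docker inspect --format '{{.State.ExitCode}}' segv-demo   # 139 = 128 + 11
docker rm segv-demo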
First, inspect the host's kernel logs to find details about the segmentation fault:
# View recent kernel messages
dmesg | tail -n 50 | grep -i segfault
# Or with journalctl
journalctl -k --since "10 minutes ago" | grep -i segfault
# Example output:
# [12345.678] app[1234]: segfault at 0 ip 00007f... sp 00007ff... error 4 in libc.so.6
The output shows the faulting memory address, the instruction pointer, and the library in which the fault occurred. The error value is a bitmask (on x86): bit 1 set means a write (clear means a read) and bit 2 set means the access came from user mode, so error 4 is a user-mode read of an unmapped address and error 6 is a user-mode write.
Check what the application was doing before the crash:
# Get container ID
docker ps -a
# View logs (may be empty if crash was immediate)
docker logs <container-id>
# Inspect container state
docker inspect --format '{{.State.ExitCode}} {{.State.Error}} {{.State.FinishedAt}}' <container-id>
Look for patterns: does it crash during startup, under load, or when processing specific data?
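For intermittent crashes, it can also help to watch Docker's event stream for containers that die with code 139 (a sketch; the format template assumes the attributes Docker emits for die events):
# Watch for containers that die and print their exit codes
docker events --filter event=die --format '{{.Actor.Attributes.name}} exited with code {{.Actor.Attributes.exitCode}}'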
Determine if the issue is container-specific by running the application directly on the host:
# Build and run natively
./your-application
# Or run in the container interactively
docker run --rm -it <image> /bin/bash
./your-application
If it works outside Docker but crashes inside, the issue is likely one of the following (see the quick check after this list):
- Missing shared libraries
- Different glibc/musl versions
- Platform architecture mismatch
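A quick first check is to compare the distribution (and therefore the C library) inside the image with the one on the host (a sketch; <image> is a placeholder):
# Identify the container's distribution (Alpine/musl vs Debian/glibc, and its release)
docker run --rm <image> cat /etc/os-release
# Compare with the host
cat /etc/os-release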
Segmentation faults often occur due to library mismatches. Verify library versions:
# Inside the container
ldd /path/to/your/binary
# Check for missing libraries
ldd /path/to/your/binary | grep 'not found'
# List library versions
ls -la /lib/ /usr/lib/
For Alpine-based images, native extensions compiled against glibc may fail because Alpine uses musl libc. Solutions:
# Switch from Alpine to a Debian-based image (instead of python:3.11-alpine)
FROM python:3.11-slim
# Or install the glibc compatibility layer in Alpine
RUN apk add --no-cache libc6-compat
Ensure the binary matches the runtime architecture:
# Check binary architecture
file /path/to/your/binary
# Check container platform
uname -m
# Build for specific platform
docker build --platform linux/amd64 -t myimage .
# Run with explicit platform
docker run --platform linux/amd64 myimage
On Apple Silicon (M1/M2) Macs, x86_64 images run under Rosetta emulation, which can cause segfaults with some applications.
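If a native ARM variant of the image exists or can be built, running it without emulation avoids this class of crashes entirely (a sketch; the tag is an example and assumes a Dockerfile in the current directory):
# Build and run a native arm64 image instead of emulating x86_64
docker buildx build --platform linux/arm64 -t myimage:arm64 --load .
docker run --rm --platform linux/arm64 myimage:arm64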
Generate core dumps to analyze the crash with a debugger:
# On the host, enable core dumps
ulimit -c unlimited
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
# Run container with elevated privileges for debugging
docker run --rm -it \
--ulimit core=-1 \
--cap-add=SYS_PTRACE \
-v /tmp:/tmp \
<image> /bin/bash
# Inside container, run the application
./your-application
# After crash, analyze with gdb
gdb /path/to/binary /tmp/core.<name>.<pid>
(gdb) bt   # Print backtrace
For compiled applications, use debugging tools to identify the exact crash location:
# Install gdb in container
apt-get update && apt-get install -y gdb
# Run application under gdb
gdb --args ./your-application
(gdb) run
# When it crashes:
(gdb) bt full   # Full backtrace with variables
For C/C++ applications, rebuild with AddressSanitizer:
# Compile with ASan
gcc -fsanitize=address -g -o myapp myapp.c
# Run - ASan will print detailed memory error info
./myapp
For Python applications with native extensions, install debug symbols and use faulthandler:
import faulthandler
faulthandler.enable()
If the crash is in a library, update to compatible versions:
# Update all packages
RUN apt-get update && apt-get upgrade -y
# For Python - reinstall with no cache
RUN pip install --no-cache-dir --force-reinstall <package>
# For Node.js - rebuild native modules
RUN npm rebuild
If using pre-built binaries, try building from source to match your environment:
# Example: rebuild numpy from source
pip install --no-binary :all: numpy
Older base images may have compatibility issues with WSL2 or newer kernels:
# Check if running under WSL2
cat /proc/version | grep -i microsoft
# Update to newer base image
# Instead of centos:6
FROM centos:7
# Instead of older Ubuntu versions
FROM ubuntu:22.04
For WSL2, try creating %userprofile%\.wslconfig with the following contents:
[wsl2]
kernelCommandLine = vsyscall=emulate
Then run wsl --shutdown and restart Docker Desktop.
Rarely, seccomp or AppArmor profiles can cause issues (though usually EPERM, not SIGSEGV):
# Test without seccomp filtering
docker run --security-opt seccomp=unconfined <image>
# Test without AppArmor
docker run --security-opt apparmor=unconfined <image>
Warning: Only use these flags for debugging. If this resolves the issue, create a proper custom seccomp profile rather than disabling security entirely.
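If unconfined mode does fix the crash, a common starting point for a custom profile is Docker's default seccomp profile, trimmed or extended as needed (a sketch; the URL points into the moby repository and may change):
# Start from the default profile and customize it instead of running unconfined
curl -fsSL -o seccomp-custom.json https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json
docker run --security-opt seccomp=seccomp-custom.json <image>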
Exit code 139 can be particularly challenging to debug because the crash occurs at the kernel level, often leaving minimal application-level logs. In containerized environments, the isolation can make it harder to access debugging tools and core dumps.
Memory alignment issues: Some CPU architectures require aligned memory access. Code that works on x86_64 may crash on ARM64 if it performs unaligned memory operations.
Alpine and musl libc: Alpine Linux uses musl libc instead of glibc. Binaries compiled against glibc may crash with segfaults when run on Alpine. This is especially common with Python packages containing C extensions, Go binaries using cgo, or pre-compiled Node.js native modules. Either use a glibc-based image (Debian, Ubuntu) or rebuild native dependencies on Alpine.
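A quick way to tell which libc an image ships is to look for its dynamic loader (a sketch; the paths are the typical locations for musl and glibc):
# ld-musl-* indicates musl (Alpine); ld-linux-* indicates glibc (Debian, Ubuntu, etc.)
docker run --rm <image> sh -c 'ls /lib/ld-musl-* /lib64/ld-linux-* /lib/*/ld-linux-* 2>/dev/null'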
QEMU emulation: When running containers for a different architecture (e.g., ARM containers on x86 via QEMU), some applications may experience segfaults due to emulation limitations or timing-sensitive code.
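On a Linux host you can check whether QEMU user-mode emulation is registered via binfmt_misc, which is a strong hint that foreign-architecture images will be emulated (a sketch):
# qemu-* entries mean emulation handlers are installed for those architectures
ls /proc/sys/fs/binfmt_misc/ | grep -i qemu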
Hardware verification: In persistent cases, especially in production, run memory tests (memtest86+) to rule out faulty RAM. Memory corruption from hardware issues can cause seemingly random segfaults that are difficult to reproduce or debug.
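Before scheduling a full memtest86+ run, check whether the host kernel has already logged machine-check (MCE) or EDAC memory-controller errors:
# Look for hardware error reports in the kernel log
dmesg | grep -iE 'mce|edac|hardware error'
journalctl -k | grep -iE 'mce|edac|hardware error'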
Kubernetes considerations: In Kubernetes, exit code 139 will cause pod restarts with CrashLoopBackOff. Use kubectl describe pod to view events and kubectl logs --previous to see logs from crashed containers. Consider using sidecar containers with debugging tools or ephemeral containers (kubectl debug) for live debugging.
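The corresponding commands, with pod and container names as placeholders:
# Inspect restart events and fetch logs from the crashed container
kubectl describe pod <pod-name>
kubectl logs <pod-name> --previous
# Attach an ephemeral debugging container targeting the crashed container's process namespace
kubectl debug -it <pod-name> --image=busybox --target=<container-name>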