This error occurs when Docker cannot create or start containers because the host filesystem has run out of disk space. The fix involves cleaning up unused Docker resources with prune commands or expanding storage capacity.
The "Error response from daemon: no space left on device" error indicates that the Docker daemon cannot complete an operation due to insufficient disk space on the host system. This typically happens when creating containers, pulling images, or writing data to volumes. Docker stores all its data in the /var/lib/docker directory on Linux (or within a virtual disk on Docker Desktop for Mac/Windows). This includes: - Container layers and writable container filesystems - Downloaded and built images - Named and anonymous volumes - Build cache from docker build operations - Container logs When the filesystem hosting this directory becomes full, the Docker daemon fails with this error. The daemon response format ("Error response from daemon:") indicates this is a server-side error originating from the Docker Engine rather than the client. Over time, Docker environments accumulate unused resources. Stopped containers, dangling images, orphaned volumes, and build cache can consume tens of gigabytes. Without regular cleanup, even large disks eventually fill up.
Analyze how Docker is consuming disk space:
docker system df

This shows space used by images, containers, local volumes, and build cache. Example output:
TYPE            TOTAL   ACTIVE  SIZE    RECLAIMABLE
Images          25      5       8.5GB   6.2GB (72%)
Containers      10      2       2.1GB   1.8GB (85%)
Local Volumes   15      3       4.3GB   3.9GB (90%)
Build Cache     0       0       5.1GB   5.1GB

For a detailed breakdown of each item:
docker system df -v

Also check host disk space:
df -h /var/lib/docker

The quickest fix is Docker's built-in prune command that removes unused resources:
docker system prune

This removes:
- All stopped containers
- All networks not used by at least one container
- All dangling images (untagged images)
- All dangling build cache
For a more aggressive cleanup that removes ALL unused images (not just dangling):
docker system prune -a

Add -f to skip confirmation:
docker system prune -af

Note: The -a flag will remove all images not associated with a running container. You'll need to re-pull them later.
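If that is too aggressive, pruning can be restricted by age with a filter. A sketch; the 24-hour window is an arbitrary example value:

# Remove only unused objects created more than 24 hours ago
docker system prune -af --filter "until=24h"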
By default, docker system prune does NOT remove volumes to prevent data loss. Remove unused volumes separately:
docker volume prune

Or include volumes in the system prune:
docker system prune -a --volumes

To see what volumes exist before deleting:
docker volume ls

Warning: Volumes may contain important data like database files. Back up critical volumes before pruning.
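A common way to back up a named volume is to mount it read-only into a throwaway container and archive it to the current directory. A sketch assuming a volume named mydata (hypothetical) and the alpine image:

# Archive the contents of the "mydata" volume into ./mydata-backup.tar.gz
docker run --rm \
  -v mydata:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/mydata-backup.tar.gz -C /source .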
Docker BuildKit cache can grow significantly with frequent builds. Clear it with:
docker builder prune

To remove ALL build cache (not just unused entries):
docker builder prune -a

This is especially useful in development environments with many iterative builds.
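Rather than clearing the cache entirely, you can cap how much of it is kept. A sketch using the --keep-storage option of docker builder prune (available on recent Docker versions; the 10GB budget is an arbitrary example):

# Prune the oldest build cache entries until no more than ~10GB remains
docker builder prune -f --keep-storage 10GB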
For targeted cleanup when you need finer control:
Remove stopped containers only:
docker container prune

Remove dangling images only:
docker image prune

Remove ALL unused images:
docker image prune -a

List images sorted by size to identify large ones:
docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}" | sort -k2 -h

Remove specific images:
docker rmi <image_id_or_name>

Remove unused networks:
docker network prune

Container logs can grow unbounded and consume significant space. Configure log rotation in /etc/docker/daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Then restart Docker:
sudo systemctl restart docker

Note that these settings only apply to containers created after the restart; existing containers keep the log configuration they were started with. To immediately truncate logs for a running container:
# Find container ID
docker ps
# Truncate log (requires sudo)
sudo truncate -s 0 /var/lib/docker/containers/<container_id>/<container_id>-json.log

For new containers, you can also specify log options at runtime:
docker run --log-opt max-size=10m --log-opt max-file=3 <image>

Confirm disk space has been recovered:
docker system df
df -h /var/lib/docker

Then retry your original command:
docker run <your_image>
# or
docker-compose up
# or
docker build -t <tag> .

The operation should now complete successfully.
Docker Desktop (Mac/Windows): Docker Desktop uses a virtual disk that can be resized. Go to Settings > Resources > Advanced > Disk image size (or "Virtual disk limit" in newer versions) to increase capacity. The disk file is located at:
- macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
- Windows: %LOCALAPPDATA%\Docker\wsl\data\ext4.vhdx
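Note that this disk image is typically a sparse file, so its apparent size can be much larger than the space it actually occupies on the host. A quick comparison on macOS, using the path listed above:

# Apparent (maximum) size of the Docker Desktop disk image
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
# Space actually allocated on the host filesystem
du -h ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw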
Moving Docker's data directory: If /var/lib/docker is on a small partition, relocate it by editing /etc/docker/daemon.json:
{
  "data-root": "/mnt/larger-disk/docker"
}

Then migrate:
sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /mnt/larger-disk/docker/
sudo systemctl start docker

Once Docker is running again, docker info should report the new path as the Docker Root Dir.

Automated cleanup in CI/CD: Add cleanup steps to prevent pipeline failures:
# GitHub Actions example
- name: Docker cleanup
  run: docker system prune -af --volumes

# GitLab CI example
after_script:
  - docker system prune -af --volumes

Kubernetes/container orchestration: If running Docker inside Kubernetes pods or as part of a container orchestration platform, consider:
- Using ephemeral runners that start fresh
- Mounting the Docker socket from the host (be aware of security implications)
- Using kaniko or buildah for rootless, daemonless builds
Prevention strategies:
- Schedule regular prune jobs, e.g. a cron entry like 0 3 * * * /usr/bin/docker system prune -af (see the sketch after this list)
- Use --rm flag for temporary containers: docker run --rm ...
- Tag images specifically instead of relying on latest to avoid version accumulation
- Monitor /var/lib/docker usage with alerting (e.g., Prometheus node_exporter)
- Set up automatic image cleanup policies in your container registry
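As a starting point for the scheduled cleanup mentioned above, here is a minimal sketch of a daily cron script; the path, the 72-hour retention window, and the choice to leave volumes untouched are all assumptions to adapt to your environment:

#!/bin/sh
# Example daily cleanup job (hypothetical path: /etc/cron.daily/docker-cleanup)
# Removes stopped containers, unused networks, unused images, and build cache
# older than 72 hours; volumes are deliberately left alone.
docker system prune -af --filter "until=72h"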
Inode exhaustion: Rarely, you may have disk space but no inodes. Check with df -ih. Docker's many layer files can exhaust inodes on some filesystems. Consider using XFS or increasing inode ratio when formatting.
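To see where inodes are going inside the data root, GNU du can count them directly (a sketch; --inodes requires GNU coreutils 8.22 or newer):

# Inode usage per filesystem
df -ih
# Inode count per subdirectory of Docker's data root (requires root)
sudo sh -c 'du --inodes -s /var/lib/docker/*' | sort -n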