This error occurs when Docker cannot extract image layers during a pull operation because the disk where Docker stores its data has run out of space. The solution involves cleaning up unused Docker resources and ensuring sufficient disk space.
The "failed to extract layer: write /: no space left on device" error occurs when Docker is pulling an image from a registry and cannot extract one of the downloaded layers to disk. Docker images consist of multiple layers that are downloaded compressed and then extracted to the local filesystem. When Docker attempts to write the extracted layer data to /var/lib/docker (or Docker Desktop's virtual disk on Mac/Windows), the operation fails because no free space remains. This is distinct from general disk space errors because it specifically happens during the layer extraction phase of an image pull. Docker requires space for both the compressed download and the uncompressed extraction. A 500MB compressed layer might expand to 2GB when extracted, so even if you have enough space for the download, the extraction can still fail.
Start by understanding how Docker is using disk space:
docker system df
This displays disk usage by category (Images, Containers, Local Volumes, Build Cache). For a detailed breakdown:
docker system df -v
Also check system-level disk space:
df -h /var/lib/docker
On Docker Desktop, check Settings > Resources to see the disk image size and usage.
Clear images that are not being used by any container:
# Remove dangling images (untagged, unreferenced)
docker image prune
# Remove ALL unused images (not just dangling)
docker image prune -a
To see what will be removed first:
docker images -f "dangling=true"
docker images
Note: The -a flag removes every image not used by at least one container, running or stopped. You will need to re-pull those images later.
Stopped containers consume disk space. Remove them:
# View stopped containers
docker ps -a --filter "status=exited"
# Remove all stopped containers
docker container prune
This frees the writable layer space used by each stopped container.
Docker's build cache can grow substantially, especially with frequent builds:
# Remove dangling build cache
docker builder prune
# Remove ALL build cache
docker builder prune -a
In CI/CD environments, the build cache is often the largest consumer of disk space.
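If you would rather keep recent cache but bound its size, docker builder prune also accepts a size cap; a sketch, with 10GB as an arbitrary threshold (the exact flag name can vary between Docker versions):
# Keep the build cache but cap it at roughly 10GB, evicting the oldest entries first
docker builder prune --keep-storage 10GB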
For a complete cleanup of all unused Docker resources:
# Remove stopped containers, unused networks, unused images, and build cache
docker system prune -a
# Include volumes in cleanup (use with caution)
docker system prune -a --volumes
Warning: The --volumes flag permanently deletes data in unused volumes. Only use it if you are certain no important data exists in Docker volumes.
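Before running the prune with --volumes, it is worth listing the volumes it would touch; only volumes not referenced by any container are shown:
# List dangling (unreferenced) volumes before deciding to prune them
docker volume ls --filter dangling=true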
Confirm disk space has been freed:
docker system df
df -h /var/lib/docker
Then retry the image pull:
docker pull <image-name>
For large images, ensure you have at least 2-3x the compressed image size available for extraction overhead.
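As a rough pre-flight check, you can compare free space under /var/lib/docker with the expected extraction footprint; a minimal sketch assuming GNU df and a hypothetical 500MB compressed image:
# Require roughly 3x the compressed size before pulling (500MB here is a placeholder)
NEEDED_KB=$((3 * 500 * 1024))
AVAIL_KB=$(df --output=avail /var/lib/docker | tail -1)
if [ "$AVAIL_KB" -gt "$NEEDED_KB" ]; then echo "enough space"; else echo "free more space first"; fi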
Docker Desktop disk limits: On Mac and Windows, Docker Desktop uses a virtual disk with a configurable size limit. Go to Settings > Resources > Advanced > Disk image size to increase it. On Mac, the Docker.raw or Docker.qcow2 file in ~/Library/Containers/com.docker.docker/Data/vms/0/data stores all Docker data.
Moving Docker's data directory: If /var/lib/docker is on a small root partition, relocate Docker's storage:
sudo systemctl stop docker
sudo mv /var/lib/docker /new/location/docker
Edit /etc/docker/daemon.json:
{
"data-root": "/new/location/docker"
}
Then restart: sudo systemctl start docker
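After the restart, one quick way to confirm the daemon picked up the new location:
# The reported data root should now be /new/location/docker
docker info | grep "Docker Root Dir"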
Checking inodes: Disk space exhaustion can also be caused by running out of inodes, not bytes:
df -ih /var/lib/docker
If IUse% is high, you have an inode problem rather than a space problem.
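To find out where the inodes are going, GNU du can count them per directory; a sketch assuming GNU coreutils 8.22 or newer:
# Count inodes per top-level directory under Docker's data root
sudo du --inodes -d 1 /var/lib/docker | sort -n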
Thin pool issues (devicemapper): On older RHEL/CentOS systems using devicemapper, the thin pool may be exhausted. Check with docker info and look for Data Space and Metadata Space. Consider migrating to the overlay2 storage driver.
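To pull just the relevant fields out of docker info when devicemapper is in use:
# Shows Data Space Used/Total/Available and the Metadata Space counterparts
docker info | grep -E "Data Space|Metadata Space"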
CI/CD considerations: In CI pipelines, always run docker system prune -af at the start or end of jobs. Consider using --filter "until=24h" to only remove resources older than 24 hours.
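A cleanup step along these lines drops into most CI systems as a plain shell command; the 24-hour window is just the example filter mentioned above:
# CI cleanup step: remove unused resources older than 24 hours
docker system prune -af --filter "until=24h"
docker builder prune -af --filter "until=24h"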
Preventing recurrence:
- Schedule regular cleanup with a weekly cron job: 0 0 * * 0 docker system prune -af
- Use specific image tags instead of :latest
- Run containers with --rm flag for automatic cleanup
- Monitor disk usage with alerts at 70% and 85% thresholds (see the sketch below)
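A minimal monitoring sketch for the last item, assuming GNU df; wire the echo statements into whatever alerting you already use:
#!/bin/sh
# Hypothetical check of Docker's data directory against the 70%/85% thresholds
USAGE=$(df --output=pcent /var/lib/docker | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge 85 ]; then
  echo "CRITICAL: Docker disk usage at ${USAGE}%"
elif [ "$USAGE" -ge 70 ]; then
  echo "WARNING: Docker disk usage at ${USAGE}%"
fi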