This error occurs when Docker runs out of disk space during an image build. The Docker daemon cannot write temporary files or layer data because the partition hosting /var/lib/docker is full.
When Docker builds an image, it creates temporary files and stores layer data in /var/lib/docker. If this partition runs out of space, the build fails with "no space left on device." This is one of the most common Docker errors, especially on development machines and CI/CD servers where images and containers accumulate over time. Docker does not automatically clean up unused images, containers, volumes, or build cache, so disk usage grows until it hits the limit. The error can also occur if the filesystem runs out of inodes (even with free disk space) or if Docker is configured with disk quotas that have been exceeded.
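Before cleaning anything up, it can help to confirm which partition hosts the Docker data directory and how large that directory has grown. A minimal sketch, assuming the default /var/lib/docker data-root:
# Show the filesystem and mount point backing Docker's data directory
df -h /var/lib/docker
# Total size of the data directory itself (needs root)
sudo du -sh /var/lib/docker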
First, identify how much space Docker is using and what's consuming it:
# Check disk space on all partitions
df -h
# Check Docker-specific disk usage
docker system df
# For detailed breakdown
docker system df -v
This shows space used by images, containers, volumes, and build cache. The output helps you decide what to clean up.
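If you want to see where the space sits on disk rather than per Docker object, a du breakdown of the data directory can also help. A sketch, assuming GNU coreutils and the default /var/lib/docker location:
# Largest subdirectories of Docker's data directory (overlay2 layers, volumes, containers, ...)
sudo du -xh --max-depth=2 /var/lib/docker | sort -h | tail -15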
The quickest fix is to remove all unused Docker objects:
# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune -f
# Also remove all unused images (not just dangling ones)
docker system prune -a -f
Note: This does NOT remove volumes by default to prevent data loss. If you're sure you don't need orphaned volumes, add the --volumes flag.
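For reference, the combined command with volumes included looks like this; only run it if losing the data in unused volumes is acceptable:
# Remove unused images, containers, networks, build cache AND unused volumes
docker system prune -a -f --volumes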
Build cache can consume significant space. To clear it:
# Remove all build cache
docker builder prune -a -f
# Remove only cache older than 24 hours
docker builder prune --filter until=24h -f
After clearing, run docker system df again to verify space was reclaimed.
If the standard prune didn't free enough space, target volumes specifically:
# List all volumes
docker volume ls
# Remove volumes not used by any container
docker volume prune -f
# Remove a specific volume
docker volume rm <volume_name>
Warning: Only do this if you're certain the data in those volumes is not needed.
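Before removing a specific volume, it can be worth checking whether any container (running or stopped) still references it. A small sketch using docker ps filters, with <volume_name> as a placeholder:
# List containers (including stopped ones) that mount the volume
docker ps -a --filter volume=<volume_name>
# Inspect the volume's metadata and mountpoint
docker volume inspect <volume_name>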
For more targeted cleanup:
# List all images with sizes
docker images -a
# Remove a specific image
docker rmi <image_id>
# Remove all exited containers
docker rm $(docker ps -aq -f status=exited)
# Remove all containers (careful!)
docker rm -f $(docker ps -aq)
Use docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}" to see image sizes and identify the largest ones.
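To clean up in bulk rather than one image at a time, something like the following can work; the "old-registry" pattern is a hypothetical example, adjust it to your own repositories:
# Remove only dangling (untagged) images
docker image prune -f
# Remove all images whose repository matches a pattern (hypothetical pattern shown)
docker images --format "{{.Repository}}:{{.Tag}}" | grep "^old-registry/" | xargs -r docker rmi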
Prevent future issues by reducing image size:
Use multi-stage builds:
# Build stage
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage - only copies what's needed
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]Use .dockerignore:
Use .dockerignore:
Create a .dockerignore file to exclude unnecessary files from the build context:
node_modules
.git
*.md
tests
.env*
This reduces both build time and disk usage.
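To get a rough idea of how much the ignore rules save, you can measure the directories being excluded before building; a quick sketch, assuming node_modules and .git exist in the project root:
# Size of the directories excluded from the build context
du -sh node_modules .git 2>/dev/null
# Total size of the project directory (approximate upper bound for the context)
du -sh .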
Sometimes the disk has free space but is out of inodes:
# Check inode usage
df -ih
If inodes are at 100% but disk space is available, you have many small files. This can happen with node_modules or cache directories. Clean up small files or move to a filesystem with more inodes.
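To locate the directories holding the most files (the usual cause of inode exhaustion), a find-based sketch like this can help; it assumes GNU find and needs root for /var/lib/docker:
# Directories under Docker's data dir with the most files (top 10)
sudo find /var/lib/docker -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -10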
If you frequently run out of space, move Docker's data directory:
# Stop Docker
sudo systemctl stop docker
# Move the data
sudo mv /var/lib/docker /path/to/new/location/docker
# Update Docker configuration
sudo nano /etc/docker/daemon.json
Add or update the data-root setting:
{
"data-root": "/path/to/new/location/docker"
}
Then restart Docker:
sudo systemctl start docker
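Once Docker is back up, it's worth confirming it actually picked up the new location; the "Docker Root Dir" field in docker info shows the active data-root:
# Should print the new data-root path
docker info | grep "Docker Root Dir"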
Alternatively, use a symlink:
sudo ln -s /path/to/new/location/docker /var/lib/docker
Automated cleanup in CI/CD:
Add cleanup steps to your CI pipeline to prevent disk exhaustion:
# Example GitLab CI
after_script:
- docker system prune -f
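If your CI runners build many images, you may prefer to cap the build cache instead of wiping it on every run; a hedged sketch where the 10GB limit is an arbitrary example:
# Keep at most ~10GB of build cache on the runner
docker builder prune -f --keep-storage=10GB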
Docker Desktop (Windows/Mac):
Docker Desktop uses a virtual disk image that can grow but doesn't automatically shrink. To reclaim space:
- Mac: Remove and recreate the Docker.raw file (Settings > Resources > Advanced > Disk Image Location)
- Windows: Use "Reset to factory defaults" in Docker Desktop settings, or manually shrink the WSL2 virtual disk
Monitoring Docker disk usage:
Set up alerts before you run out of space:
# Simple script to check Docker disk usage
USAGE=$(docker system df --format '{{.Size}}' | head -1)
echo "Docker is using: $USAGE"Setting up automatic cleanup with cron:
Setting up automatic cleanup with cron:
# Add to crontab -e
0 0 * * * docker system prune -af --filter "until=168h"
This removes unused resources older than 7 days, daily at midnight.
Overlay2 storage driver considerations:
The overlay2 storage driver (default on modern systems) is more efficient than aufs or devicemapper. Check your driver with:
docker info | grep "Storage Driver"
For Kubernetes/container orchestration:
If running many containers, consider using local ephemeral storage limits and monitoring with Prometheus + cAdvisor.