This error occurs when Docker operations fail due to disk space limitations, inode exhaustion, or kernel keyring limits. The fix involves pruning unused Docker resources, increasing disk space or inodes, and adjusting kernel parameters.
The "Disk quota exceeded" error in Docker can stem from several different causes, making it one of the more confusing errors to troubleshoot. While the message suggests you've run out of disk space, the actual cause is often not related to disk space at all. The most common causes are: 1. **Kernel keyring exhaustion**: The Linux kernel maintains a keyring for session keys. Docker creates a unique session key for each container, and the default limit is relatively low. When this limit is reached, new containers cannot start and report "disk quota exceeded" even though the filesystem has plenty of space. 2. **Actual disk space exhaustion**: Docker images, containers, volumes, and build cache can consume significant disk space over time. Dangling images, stopped containers, and unused volumes accumulate and eventually fill the disk. 3. **Inode exhaustion**: On some filesystems (especially in VPS environments), you can run out of inodes before running out of disk space. Each file and directory consumes one inode, and Docker creates many small files. 4. **Storage driver quotas**: Some Docker storage backends (devicemapper, ZFS, btrfs) support per-container disk quotas that may be exceeded. Understanding which cause applies to your situation is the first step in resolving this error.
First, determine whether this is actually a disk space issue:
# Check disk space
df -h
# Check inode usage (often overlooked!)
df -i
If disk space shows plenty of room but inodes are at 100%, you've hit inode exhaustion. If both show available space, the issue is likely the kernel keyring limit.
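Since Docker's data normally lives under /var/lib/docker, it can also help to check the specific filesystem that holds it (adjust the path if you have moved Docker's data directory):
# Space and inode usage for the filesystem containing Docker's data
df -h /var/lib/docker
df -i /var/lib/docker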
Check Docker's own disk usage:
docker system df
This shows space used by images, containers, volumes, and build cache.
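For a per-object breakdown showing which individual images, containers, and volumes take the most space, add the verbose flag:
# Detailed, per-image/container/volume listing
docker system df -v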
If disk space and inodes look fine, the issue is likely the kernel keyring limit. This is especially common when running Docker inside LXC containers or after creating/destroying many containers.
Check the current keyring limits:
cat /proc/sys/kernel/keys/maxkeys
cat /proc/sys/kernel/keys/maxbytes
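Those two files are the limits. To see how many keys each user is actually holding against its quota, the kernel exposes a per-UID summary:
# Per-user key counts and quota consumption
cat /proc/key-users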
Increase the limits temporarily:
# Run on the HOST (not inside a container)
echo 200000 | sudo tee /proc/sys/kernel/keys/maxkeys
echo 25000000 | sudo tee /proc/sys/kernel/keys/maxbytes
Make the change permanent by adding to /etc/sysctl.conf:
sudo tee -a /etc/sysctl.conf << EOF
kernel.keys.maxkeys = 200000
kernel.keys.maxbytes = 25000000
EOF
sudo sysctl -p
Note: If you're running Docker inside LXC/LXD, you must change these settings on the LXC host, not inside the container.
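To confirm the new values are active:
# Print the keyring limits the kernel is currently using
sysctl kernel.keys.maxkeys kernel.keys.maxbytes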
Clean up unused Docker objects to free disk space:
# Remove all stopped containers
docker container prune -f
# Remove unused images (not just dangling)
docker image prune -a -f
# Remove unused volumes (WARNING: may delete data!)
docker volume prune -f
# Remove build cache
docker builder prune -f
# Nuclear option: remove EVERYTHING unused
docker system prune -a --volumes -f
To review what will be removed before confirming:
docker system prune -a --volumes # without -f, shows confirmation
Warning: docker volume prune will delete data in unnamed volumes. Make sure you don't need any data before running it.
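Before pruning volumes, you can list the ones Docker currently considers unused (dangling) and double-check them:
# Volumes not referenced by any container
docker volume ls -f dangling=true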
Container logs can grow very large and consume significant disk space:
# Find large log files
sudo find /var/lib/docker/containers -name "*.log" -size +100M
# Truncate a specific container's log
sudo truncate -s 0 /var/lib/docker/containers/<container_id>/<container_id>-json.log
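To see how much space container logs are consuming in total (run through a root shell so the glob can expand inside /var/lib/docker):
# Sum the size of all JSON log files
sudo sh -c 'du -ch /var/lib/docker/containers/*/*-json.log | tail -n 1'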
Configure log rotation in your Docker daemon (/etc/docker/daemon.json):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Then restart Docker:
sudo systemctl restart docker
For individual containers, set log limits at runtime:
docker run --log-opt max-size=10m --log-opt max-file=3 myimage
If df -i shows high inode usage, you need to find and remove files:
# Find directories with many files
sudo find /var/lib/docker -xdev -type d -exec sh -c 'echo "$(find "$1" -maxdepth 1 | wc -l) $1"' _ {} \; | sort -rn | head -20
Common culprits:
- Old container layers in /var/lib/docker/overlay2
- Small temporary files created by applications
- Package manager caches inside images
For VPS environments, you may need to contact your provider to increase inode allocation or recreate the VPS with better inode settings.
As a workaround, move Docker's data directory to a partition with more inodes:
# Stop Docker
sudo systemctl stop docker
# Move Docker data
sudo mv /var/lib/docker /mnt/larger-partition/docker
sudo ln -s /mnt/larger-partition/docker /var/lib/docker
# Start Docker
sudo systemctl start docker
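Instead of a symlink, you can also point the daemon at the new location with the data-root option in /etc/docker/daemon.json (merge it with any existing settings, such as the log options above, and restart Docker afterwards):
{
  "data-root": "/mnt/larger-partition/docker"
}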
If you need per-container disk quotas, configure them in the storage driver:
For overlay2 with XFS (requires project quota support):
# Mount XFS with project quota
sudo mount -o pquota /dev/sda1 /var/lib/docker
# Set default container size in daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.size=10G"
  ]
}
For devicemapper:
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=20G"
  ]
}
At container runtime:
docker run --storage-opt size=5G myimage
Note: Not all storage drivers support runtime size limits.
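To check which storage driver your daemon is actually using before applying any of these options:
# Prints the active storage driver, e.g. overlay2
docker info --format '{{.Driver}}'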
Prevent future disk quota issues by scheduling regular cleanup:
Using cron:
# Add to crontab (crontab -e)
0 2 * * * docker system prune -f --filter "until=168h" > /dev/null 2>&1
This removes unused resources older than 7 days every night at 2 AM.
Using systemd timer:
# /etc/systemd/system/docker-cleanup.service
[Unit]
Description=Docker cleanup
[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -f --filter "until=168h"
# /etc/systemd/system/docker-cleanup.timer
[Unit]
Description=Run Docker cleanup weekly
[Timer]
OnCalendar=weekly
Persistent=true
[Install]
WantedBy=timers.target
Enable with:
sudo systemctl enable --now docker-cleanup.timer
Monitor disk usage: Set up alerts when Docker disk usage exceeds a threshold to catch issues before they cause failures.
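As a minimal sketch of such an alert (the 80% threshold and the syslog destination are arbitrary examples), a small script run from cron or a timer could look like this:
#!/bin/sh
# Warn via syslog when the filesystem holding /var/lib/docker is over 80% full
THRESHOLD=80
USAGE=$(df --output=pcent /var/lib/docker | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    logger -t docker-disk "Docker data directory at ${USAGE}% disk usage"
fi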
Understanding the kernel keyring issue: The Linux kernel keyring system is used for managing cryptographic keys and other security credentials. Docker creates a unique session keyring for each container for security isolation. The default limit of keys (maxkeys) and their total size (maxbytes) can be quickly exhausted in environments with high container churn. This is why the error message says "disk quota exceeded" even when disk space is plentiful - it's a quota on the keyring, not the filesystem.
Docker inside LXC/LXD: Running Docker inside LXC containers requires special consideration. LXC containers share the host kernel, including the keyring. You must increase maxkeys on the LXC host, not inside the container. Some LXC configurations may also have their own disk quotas that need adjustment.
ZFS considerations: When using ZFS as Docker's storage backend, you can set quotas per dataset:
zfs set quota=50G rpool/docker
zfs set reservation=20G rpool/docker
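To verify what is currently set on the dataset:
# Show the quota and reservation applied to the Docker dataset
zfs get quota,reservation rpool/docker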
BTRFS subvolume quotas: For BTRFS:
btrfs quota enable /var/lib/docker
btrfs qgroup limit 50G /var/lib/docker
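To check usage against the limit afterwards:
# Show referenced/exclusive usage and limits for each qgroup
btrfs qgroup show -re /var/lib/docker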
Debugging disk usage in overlay2: The overlay2 storage driver can make it hard to see what's using space:
# See which images use the most space
docker images --format "{{.Size}}\t{{.Repository}}:{{.Tag}}" | sort -h
# Find large layers
sudo sh -c 'du -sh /var/lib/docker/overlay2/* | sort -h | tail -20'
QNAP NAS specific: QNAP systems have their own quota system that can conflict with Docker. If you see quota errors on QNAP: 1) Enable quotas in Control Panel > Privileges > Quota, 2) Set quotas for users, then 3) Disable quotas again. This can reset stuck quota states.
Kubernetes and container runtimes: In Kubernetes environments, similar issues can occur. Check /proc/sys/kernel/keys/maxkeys on all nodes. For containerd or CRI-O, the same keyring issues apply since they use the same kernel mechanisms.
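As a rough spot-check across nodes, assuming SSH access and a hypothetical nodes.txt file listing one hostname per line:
# nodes.txt is a hypothetical list of node hostnames
while read -r node; do
  printf '%s: ' "$node"
  ssh -n "$node" cat /proc/sys/kernel/keys/maxkeys
done < nodes.txt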