The "No Space Left on Device" error occurs when Terraform runs out of disk space during initialization, planning, or apply operations. This typically happens in CI/CD environments or when provider caches grow too large. Diagnose disk usage and implement caching strategies to resolve.
This error occurs when Terraform attempts to write files (provider plugins, state, or temporary files) but the filesystem has no available space. Unlike some errors with misleading messages, this one is usually literal: the disk is genuinely full. The error can occur at several stages:

- During `terraform init`, when downloading and extracting provider plugins
- During `terraform plan`, when creating temporary files
- During `terraform apply`, when writing state files
- In CI/CD, when containers or build agents have limited storage
Identify the bottleneck:

```shell
# Check disk usage
df -h

# Check inode usage (a filesystem can run out of inodes with space left)
df -i

# Find what's consuming space
du -sh ~/ /tmp/ /var/tmp/ 2>/dev/null
du -sh .terraform/
```

Look for filesystems at 100% capacity.
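Duplicated provider trees are the usual culprit when a Terraform host fills up. A minimal sketch, assuming GNU `find` and `du`, that ranks every `.terraform` directory under the current tree by size:

```shell
# Rank all .terraform directories under the current tree, largest last.
# -prune keeps find from descending into the matched directories.
find . -type d -name ".terraform" -prune -exec du -sh {} + 2>/dev/null | sort -h
```

The largest entries tell you which workspaces to clean up first.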
Remove the provider cache to free space:

```shell
# In your Terraform working directory
rm -rf .terraform/

# Or remove only the provider plugins
rm -rf .terraform/providers/

# For multiple workspaces (-prune avoids errors from descending
# into directories that were just removed)
find . -name ".terraform" -type d -prune -exec rm -rf {} +
```

You'll need to run `terraform init` again to re-download providers.
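If you do this often, the steps above can be wrapped in a small helper. `tf_clean` is a hypothetical name, not a Terraform command; this is a sketch that shows free space before and after the cleanup:

```shell
# Hypothetical helper: report free space, delete all provider trees
# under the current directory, then report free space again.
tf_clean() {
  df -h .
  find . -type d -name ".terraform" -prune -exec rm -rf {} +
  df -h .
}
```

Run `tf_clean` and then `terraform init` to rebuild only the workspace you actually need.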
Set up a shared provider cache for all projects:

```shell
# Create the cache directory (use a location with more space)
mkdir -p ~/.terraform.d/plugin-cache

# Set the environment variable
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"

# Make it persistent in ~/.bashrc or ~/.zshrc
echo 'export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"' >> ~/.bashrc
```

Or configure it in `~/.terraformrc`:

```hcl
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
```

This shares provider binaries across all projects instead of duplicating them in every working directory.
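After the next `terraform init`, you can confirm the cache is working: Terraform links project provider directories into the shared cache instead of re-downloading. A sketch of that check:

```shell
# Make sure the shared cache exists and see how much it holds
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
du -sh "$TF_PLUGIN_CACHE_DIR"

# After an init, cached providers show up as symlinks in the project tree
find .terraform/providers -type l 2>/dev/null
```

If the second command prints symlinks, the cache is being reused rather than duplicated.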
Add cleanup steps to your CI pipeline:

```yaml
# GitHub Actions
- name: Clean Terraform cache
  run: |
    rm -rf ~/.terraform.d/plugin-cache
    rm -rf .terraform/
    df -h

- name: Terraform Init
  run: terraform init
```

```yaml
# GitLab CI
script:
  - rm -rf ~/.terraform.d/plugin-cache .terraform/
  - df -h
  - terraform init
  - terraform plan
```

Run the cleanup before every `terraform init` to ensure a clean state.
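A pre-flight check can also fail the job early, before a half-written state or provider download makes the situation worse. A minimal sketch (the 1 GiB threshold is an arbitrary example value, not a Terraform requirement):

```shell
# Fail fast if available space drops below a threshold before terraform runs.
MIN_KB=$((1024 * 1024))  # 1 GiB, example threshold; tune for your providers
AVAIL_KB=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$AVAIL_KB" -lt "$MIN_KB" ]; then
  echo "Insufficient disk: ${AVAIL_KB} KiB free, need ${MIN_KB} KiB" >&2
  exit 1
fi
echo "Disk check passed: ${AVAIL_KB} KiB free"
```

Put this as the first step of the job so the failure message points at disk space rather than a cryptic mid-apply error.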
Configure larger storage in your CI system (exact keys depend on your platform and runner setup):

```yaml
# GitHub Actions - use a larger runner; label names like this are
# defined by your org's larger-runner configuration
runs-on: ubuntu-latest-xl
```

```yaml
# GitLab CI - storage is controlled by the runner/executor config,
# e.g. ephemeral storage on a Kubernetes executor
resources:
  disk: 100Gi
```

```shell
# Docker - increase container disk (requires a storage driver
# that supports per-container size limits)
docker run --storage-opt size=100G
```

```yaml
# Kubernetes - increase the PVC
resources:
  requests:
    storage: 100Gi
```

If /tmp is full, redirect temporary files to an alternate location:
```shell
# Check /tmp usage
du -sh /tmp

# Use a location with more space
export TMPDIR="/path/with/more/space"
export TMP="$TMPDIR"

# Run terraform
terraform init
terraform plan
```

This prevents Terraform from filling /tmp during operations.
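You can verify the redirect before running Terraform: `mktemp` honors `TMPDIR`, so a quick probe shows where temporary files will actually land. A sketch, using a directory created in the current working tree as a stand-in for your larger volume:

```shell
# Point TMPDIR at a stand-in directory and confirm mktemp honors it
export TMPDIR="$PWD/tf-tmp"   # substitute a mount point with free space
mkdir -p "$TMPDIR"
TMPFILE=$(mktemp)
echo "temp files are now created under: $TMPFILE"
rm -f "$TMPFILE"
```

If the printed path is still under /tmp, the variable was not exported into the shell that runs Terraform.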
Reduce init-time overhead in minimal environments:

```shell
# Disable state locking (only safe with single-user access to the state)
terraform init -lock=false
```

Or trim startup checks in the backend configuration:

```hcl
terraform {
  backend "s3" {
    # bucket, key, and region omitted for brevity
    skip_credentials_validation = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
  }
}
```

Note: only disable locking if you have single-user access to the state.
For Terraform Enterprise users: when mounted disk operational mode is enabled, high inode usage can surface as disk-full errors even with byte capacity available (check with df -i). PostgreSQL shared memory issues ("No space left on device" during Postgres operations) can be resolved by increasing the Docker shared memory size (e.g. --shm-size=2g) and tuning the shared_buffers and max_locks_per_transaction PostgreSQL parameters. In large monorepos with many workspaces, using Terragrunt with provider caching can further reduce disk overhead. Consider provider mirrors or Terraform registry mirrors in air-gapped environments to cache providers centrally. Provider plugins can be 500MB+ uncompressed; plan accordingly for ephemeral CI environments.
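The inode check mentioned above can be reduced to a one-liner, assuming a POSIX-compliant `df` (the `-P` flag keeps the output on one parseable line):

```shell
# Print inode usage for the current filesystem; values near 100% explain
# "disk full" errors even when `df -h` still shows free bytes.
df -Pi . | awk 'NR==2 {print "inode usage:", $5, "on", $6}'
```

Anything above roughly 90% inode usage is worth cleaning up before it manifests as this error.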