PostgreSQL encounters a disk full error when the file system runs out of available space. This prevents writes and can cause database shutdown. Immediate action is needed to free space or expand storage.
A "Disk full" error (ENOSPC - Error No Space) occurs when PostgreSQL attempts to write data but the underlying file system has no available disk space. This can happen during data writes, WAL (Write-Ahead Log) operations, or temporary file creation. PostgreSQL's response depends on where the error occurs: data writes may abort the current transaction and allow read-only access, but WAL errors trigger an immediate PANIC and database shutdown since transaction logging is critical.
Connect to your PostgreSQL server and check which filesystem is full:
df -h
Identify the partition holding the PostgreSQL data directory (typically /var/lib/postgresql or your custom PGDATA). Then check PostgreSQL directory sizes:
du -sh /var/lib/postgresql/[version]/main
du -sh /var/lib/postgresql/[version]/main/pg_wal
If pg_wal is consuming most space, your archiving or replication is likely broken.
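For a finer-grained view of where the space went inside the data directory (the path is the Debian/Ubuntu default; substitute your own PGDATA), a one-level breakdown sorted by size is usually enough:
# Largest subdirectories: base holds table data, pg_wal holds WAL
sudo du -h --max-depth=1 /var/lib/postgresql/[version]/main | sort -h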
Halt any ongoing backup or bulk operations that might be generating excessive WAL:
# Check WAL file count
ls -la /var/lib/postgresql/[version]/main/pg_wal/ | wc -l
If you see thousands of WAL files not being cleaned up, your archive_command or replication is failing. Check PostgreSQL logs:
tail -100 /var/log/postgresql/postgresql.log
Look for archive_command errors or replication slot issues.
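The archiver's own statistics are often quicker to read than the logs; a growing failed_count alongside a stale last_archived_time points at a broken archive_command (this assumes archiving is enabled at all):
# failed_count should be zero or static; if it keeps climbing, archive_command is failing
psql -U postgres -x -c "SELECT * FROM pg_stat_archiver;"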
Choose appropriate actions based on your situation:
Option A: Remove old transaction logs (if archiving is working):
# On a standby or after verifying archives are safe
rm /var/lib/postgresql/[version]/main/pg_wal/0000*
Be careful: deleting files from pg_wal on a primary can make the cluster unrecoverable if any segment is still needed for crash recovery or replication, so only do this after confirming the files have been safely archived.
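A less error-prone alternative to a raw rm is pg_archivecleanup, which ships with PostgreSQL and removes only WAL files older than a segment you name; the directory and segment name below are placeholders, and -n performs a dry run so you can review what would be deleted first:
# Preview (-n) which WAL files older than the named segment would be removed
pg_archivecleanup -n /path/to/archive 000000010000000000000010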
psql -U postgres -d your_database -c "VACUUM ANALYZE;"For aggressive reclamation (requires 2x current table space):
psql -U postgres -d your_database -c "VACUUM FULL;"Option C: Drop or truncate unused tables:
psql -U postgres -d your_database -c "DROP TABLE unused_table;"Option D: Use pg_repack for online reclamation (requires installation):
pg_repack -U postgres -d your_database -k -v
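Whichever option you choose, it helps to know which relations are actually the large ones first; a query along these lines, run in the affected database, lists the top space consumers including their indexes and TOAST data:
# Ten largest user tables by total on-disk size
psql -U postgres -d your_database -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size FROM pg_stat_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"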
If the database is legitimately large, expand storage:
Cloud environments (AWS, Azure, GCP):
- Stop the PostgreSQL service
- Expand the EBS/managed disk volume through your cloud console
- Expand the filesystem (e.g., sudo resize2fs /dev/xxx for ext4; see the example after this list)
- Restart PostgreSQL
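As a concrete sketch for an Ubuntu host on AWS with an ext4 volume (the device names are assumptions; check lsblk on your own system, and use xfs_growfs instead of resize2fs on XFS):
# After growing the volume in the cloud console, grow the partition, then the filesystem
lsblk
sudo growpart /dev/nvme0n1 1
sudo resize2fs /dev/nvme0n1p1
df -h /var/lib/postgresql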
On-premises:
- Add a new disk or replace with larger capacity
- Create new partition and extend the filesystem
- Or use tablespaces to move database files to another disk:
# Create tablespace on new disk
sudo mkdir -p /mnt/postgres_data/new_tablespace
sudo chown postgres:postgres /mnt/postgres_data/new_tablespace
psql -U postgres -c "CREATE TABLESPACE new_space LOCATION '/mnt/postgres_data/new_tablespace';";
# Move tables or indexes
psql -U postgres -d your_database -c "ALTER TABLE large_table SET TABLESPACE new_space;";Prevent future disk full incidents:
Prevent future disk full incidents:
Check archive_command:
SHOW archive_command;
If it's failing, verify the archive destination is accessible and has space:
ls -la /path/to/archive/
df -h /path/to/archive/
Fix replication slots:
-- List slots
SELECT slot_name, restart_lsn FROM pg_replication_slots;
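-- Optional: see how much WAL each slot is holding back before dropping anything
-- (a sketch; pg_current_wal_lsn() and pg_wal_lsn_diff() require PostgreSQL 10 or newer)
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;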
-- Drop abandoned slots
SELECT pg_drop_replication_slot('slot_name');
Set up WAL limits (PostgreSQL 13+):
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
SELECT pg_reload_conf();
Confirm everything is working:
psql -U postgres -c "SELECT datname, pg_database_size(datname) FROM pg_database ORDER BY pg_database_size DESC;"Ensure PostgreSQL is accepting writes:
psql -U postgres -d your_database -c "CREATE TABLE test_write (id int); DROP TABLE test_write;"Set up proactive monitoring:
-- Create a monitoring view
CREATE VIEW disk_usage AS
SELECT
datname,
pg_size_pretty(pg_database_size(datname)) as size,
ROUND((pg_database_size(datname)::numeric / 1099511627776 * 100), 2) as percent_of_tb
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
SELECT * FROM disk_usage;
Ship database logs and disk metrics to your monitoring system, or use Prometheus/pganalyze, and alert when disk usage exceeds 70% or 80%.
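If a full monitoring stack is not in place yet, even a small cron job watching the data-directory filesystem helps; a minimal sketch, assuming the Debian/Ubuntu default path, an 80% threshold, and a working local mail command:
#!/bin/bash
# Warn when the filesystem holding the PostgreSQL data directory crosses the threshold
THRESHOLD=80
USAGE=$(df --output=pcent /var/lib/postgresql | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "PostgreSQL data filesystem is at ${USAGE}% capacity" \
        | mail -s "Disk space warning on $(hostname)" dba@example.com
fi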
PostgreSQL's ENOSPC handling varies by context. In smgrextend() (extending data files during writes), it aborts only the current transaction, so the server keeps running. In smgrwrite() (flushing dirty buffers, e.g. during checkpoints), it is more dangerous because already committed changes cannot be written back to the data files. In WAL writes, it causes an immediate PANIC and shutdown. For this reason, isolate pg_wal on a separate filesystem with its own monitoring. Use max_slot_wal_keep_size (PostgreSQL 13+) to prevent replication slots from retaining unbounded WAL. VACUUM FULL temporarily needs free space roughly equal to the table it is rewriting, since it builds a complete copy; consider pg_repack for online space reclamation instead. ext4 reserves a percentage of blocks for root by default, which leaves a small emergency buffer, but PostgreSQL runs as an unprivileged user and still hits ENOSPC once the unreserved space is exhausted. Always maintain spare capacity and monitor trends, not just absolute usage.
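Relocating pg_wal onto its own filesystem is usually done by stopping the server and replacing the directory with a symlink; a sketch assuming a Debian/Ubuntu layout and a dedicated mount at /mnt/wal (for new clusters, initdb --waldir sets this up from the start):
# Move pg_wal to a dedicated filesystem (the server must be stopped)
sudo systemctl stop postgresql
sudo mv /var/lib/postgresql/[version]/main/pg_wal /mnt/wal/pg_wal
sudo ln -s /mnt/wal/pg_wal /var/lib/postgresql/[version]/main/pg_wal
sudo chown -h postgres:postgres /var/lib/postgresql/[version]/main/pg_wal
sudo systemctl start postgresql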