PostgreSQL cannot write new data because the underlying filesystem has exhausted its storage capacity. Resolve this by freeing disk space, cleaning up temporary files, or increasing available storage.
This error occurs when PostgreSQL attempts to extend a data file but receives an ENOSPC ("no space left on device") error from the operating system. The database cannot allocate new space for tables, indexes, or write-ahead logs (WAL). This is a critical condition that can lead to database corruption, failed transactions, and complete service unavailability if left unresolved. Unlike application errors that allow some operations to continue, a full disk forces PostgreSQL into an unsafe state where even cleanup operations like VACUUM may fail.
First, verify that PostgreSQL is actually out of disk space and identify which filesystem is full:
# Check disk usage for entire filesystem
df -h
# Check PostgreSQL data directory size
du -sh /var/lib/pgsql/data
# Or for non-standard installations
du -sh $PGDATA
Look for a filesystem showing 100% or near-full usage. If the system disk (the root filesystem) is full, you have OS-level problems that go beyond PostgreSQL.
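If the data directory is the culprit, you can usually still run read-only queries to see which databases consume the most space. A quick check, assuming you can still connect:
-- Largest databases first (read-only, works even when writes fail)
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;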
Connect as a superuser and terminate idle connections. DO NOT kill long-running transactions abruptly:
-- See active connections
SELECT pid, usename, state, query FROM pg_stat_activity;
-- Terminate idle connections only
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND pid <> pg_backend_pid();
This prevents new writes from consuming the remaining space while you work.
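If you want to be more selective, pg_stat_activity also records when each session last changed state, so you can target only connections that have been idle for a while. A sketch (the 10-minute threshold is an arbitrary example):
-- Terminate only connections idle for more than 10 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '10 minutes'
  AND pid <> pg_backend_pid();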
The fastest way to recover disk space is often to remove old log files; PostgreSQL logs can grow to many gigabytes:
# Find and remove old log files (keep the last 3 days for debugging)
# Note: the default log directory is pg_log before PostgreSQL 10 and log from version 10 onward
find /var/lib/pgsql/data/pg_log -type f -mtime +3 -delete
# Or if using syslog, check system logs
find /var/log -name '*postgres*' -mtime +7 -delete
# Verify space recovery
df -h
Only delete logs older than the retention period you're comfortable with.
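On PostgreSQL 10 and later you can locate and size log files from SQL before deleting anything, which helps when you are unsure where logging is configured (pg_ls_logdir requires superuser or pg_monitor membership):
-- Where logs are written (relative paths are under the data directory)
SHOW log_directory;
-- Largest log files first (PostgreSQL 10+)
SELECT name, pg_size_pretty(size) AS size, modification
FROM pg_ls_logdir()
ORDER BY size DESC
LIMIT 10;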
If queries crashed before completing, temporary files may linger:
# List temporary files
ls -lh /var/lib/pgsql/data/base/pgsql_tmp/
# Remove them (only safe when no queries are running; safest with PostgreSQL stopped)
rm -rf /var/lib/pgsql/data/base/pgsql_tmp/*
# Restart PostgreSQL to ensure a clean state
sudo systemctl restart postgresql
Temporary files are safe to delete because they are only used during query execution; PostgreSQL also clears leftover temporary files automatically on restart.
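To see whether temporary files are a recurring problem rather than a one-off, per-database statistics track how many temp files have been created and their total size since the last stats reset:
-- Cumulative temporary file usage per database
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;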
Abandoned replication slots prevent WAL cleanup, causing pg_wal to fill the disk. Identify problematic slots:
-- List all replication slots and their retention
SELECT slot_name, slot_type, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;
-- Check which WAL files are retained
SELECT * FROM pg_ls_waldir()
ORDER BY modification DESC LIMIT 10;
-- Drop abandoned slots (replicas that won't reconnect)
SELECT pg_drop_replication_slot('slot_name');
Only drop slots for standby servers you have permanently decommissioned.
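To quantify how much WAL each slot is pinning before you drop anything, compare the slot's restart_lsn to the current WAL position. A sketch for a primary server (on a standby, use pg_last_wal_receive_lsn() instead of pg_current_wal_lsn()):
-- WAL retained by each slot, largest first
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
WHERE restart_lsn IS NOT NULL
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;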
After freeing disk space, vacuum tables to reclaim space from deleted rows:
-- Standard VACUUM (runs alongside normal reads and writes)
VACUUM ANALYZE;
-- For critical tables, use VACUUM FULL (exclusive lock; run during a maintenance window)
VACUUM FULL my_large_table;
Standard VACUUM only makes space reusable within the table. VACUUM FULL returns unused space to the OS, but it rewrites the table under an exclusive lock.
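To decide which tables to vacuum first, the statistics views track dead tuples per table; the tables with the most dead rows usually yield the most reusable space:
-- Tables with the most dead tuples
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;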
The cleanup steps above are stopgaps; sustainable operations require more storage headroom:
For cloud environments (AWS RDS, Azure Database for PostgreSQL):
- Enable storage autoscaling in console
- Monitor FreeStorageSpace CloudWatch metrics
- Set up alerts that fire when free space drops below 20-30%
For self-hosted PostgreSQL:
- Add a new volume if expansion is possible
- Move data or WAL to separate filesystem
- Use LVM to expand existing volume
Example: Add tablespace on new disk
CREATE TABLESPACE fast_disk LOCATION '/mnt/new_disk/pgdata';
ALTER TABLE large_table SET TABLESPACE fast_disk;
This distributes load and prevents a single disk from filling. Note that SET TABLESPACE physically copies the table to the new location and holds an exclusive lock on it while doing so.
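After moving tables, you can confirm how space is distributed across tablespaces (pg_default covers the main data directory):
-- Size of each tablespace
SELECT spcname, pg_size_pretty(pg_tablespace_size(spcname)) AS size
FROM pg_tablespace;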
PostgreSQL 13+: max_slot_wal_keep_size limits how much WAL replication slots may retain, preventing unbounded growth:
max_slot_wal_keep_size = 10GB
Separate filesystems are essential: never store data and pg_wal on the same disk. If pg_wal fills, PostgreSQL panics and shuts down; isolating it allows controlled degradation.
WARNING: Never delete WAL files directly. Even one missing WAL segment can corrupt your database permanently. Only use pg_drop_replication_slot() to allow proper WAL cleanup.
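Rather than touching files, check each slot's health against this limit from SQL; a wal_status of 'lost' means the slot has already lost required WAL and should be dropped:
-- Slot health relative to max_slot_wal_keep_size (PostgreSQL 13+)
SELECT slot_name, active, wal_status, safe_wal_size
FROM pg_replication_slots;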
Autovacuum tuning - the defaults are sized for small databases and are often too conservative for large, write-heavy ones:
autovacuum_vacuum_scale_factor = 0.05  # vacuum after 5% of the table changes
autovacuum_analyze_scale_factor = 0.02
autovacuum_naptime = 1min
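For individual hot tables, you can also override these thresholds per table rather than globally (my_large_table here is illustrative):
-- Vacuum this table after roughly 1% of its rows change
ALTER TABLE my_large_table SET (autovacuum_vacuum_scale_factor = 0.01);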
Docker on macOS: the Linux VM's disk can fill up even when your Mac has free space. Increase Docker Desktop > Settings > Resources > Disk image size.
XFS filesystems: rare but documented cases report ENOSPC errors despite available space, caused by allocation group fragmentation. Use xfs_info to inspect the filesystem, and consider rebalancing or migrating to ext4.