The PostgreSQL error "PANIC: could not write to file" occurs when the database runs out of disk space or encounters I/O errors while writing critical data. This is a severe issue that stops the server and requires immediate action to resolve.
This PANIC-level error indicates that PostgreSQL encountered a critical failure while attempting to write essential data to disk. This typically involves Write-Ahead Logging (WAL) files or database files and usually means the disk is full, permissions are denied, or the filesystem has I/O issues. Unlike regular errors, PANICs cause PostgreSQL to crash immediately and the database cannot accept new connections until the issue is resolved.
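The server log records the exact file that could not be written, which tells you whether WAL or a relation file was affected. On a Debian/Ubuntu install the log is typically in the location assumed here (adjust the path for other layouts):
sudo grep -i "PANIC" /var/log/postgresql/postgresql-15-main.log | tail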
Run the df command to see available disk space on all mounted filesystems:
df -h
Look for the partition where PostgreSQL data is stored (usually /var or /home). If it shows 100% full or very close (>95%), this is the cause.
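If you are not sure which filesystem holds the data directory, you can pass the path directly to df; the default Debian/Ubuntu location is assumed here:
df -h /var/lib/postgresql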
Check specifically the PostgreSQL data directory:
du -sh /var/lib/postgresql
du -sh /var/lib/postgresql/15/main/pg_wal
If pg_wal is consuming excessive space, this indicates WAL files are not being archived or recycled properly.
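As a rough check, count the WAL segments; each segment is 16MB by default, so the count multiplied by 16MB approximates the space held in pg_wal (the path assumes the Debian/Ubuntu version 15 layout):
sudo -u postgres ls /var/lib/postgresql/15/main/pg_wal | wc -l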
If disk space is the issue, you need to free space WITHOUT deleting PostgreSQL files.
First, identify large files outside PostgreSQL:
# Find large files in common locations
find /var/log -type f -size +100M -exec ls -lh {} \;
find /tmp -type f -size +100M -exec ls -lh {} \;
find /home -type f -size +1G -exec ls -lh {} \;
Delete unnecessary logs and temporary files:
# Clear old PostgreSQL logs (if not archiving them)
rm -f /var/log/postgresql/postgresql-*.log.*
# Clear system logs if needed
sudo journalctl --vacuum-time=10d
# Clear temp files
rm -rf /tmp/*
NEVER delete files from /var/lib/postgresql or the pg_wal directory directly, as this can corrupt the database.
A common practice is to create a large dummy file that can be deleted quickly in emergencies to gain space:
# Create a 500MB dummy file in the data directory
cd /var/lib/postgresql/15/main
sudo -u postgres fallocate -l 500M EMERGENCY_DELETE_ME
# Or if fallocate is not available:
sudo -u postgres dd if=/dev/zero of=EMERGENCY_DELETE_ME bs=1M count=500
Then, if you hit disk full again in the future, you can quickly delete it:
sudo rm /var/lib/postgresql/15/main/EMERGENCY_DELETE_ME
Once you've freed disk space, try restarting PostgreSQL:
sudo systemctl restart postgresql
Monitor the startup process:
sudo journalctl -u postgresql -f
Look for messages like "database system was not properly shut down; automatic recovery in progress" followed eventually by "database system is ready to accept connections".
If the restart succeeds, verify the database is accepting connections:
psql -U postgres -c "SELECT version();"
If you're using WAL archival (archive_mode = on in postgresql.conf), the archival process may be failing, preventing WAL cleanup:
# Check PostgreSQL configuration
sudo -u postgres grep -i archive /etc/postgresql/15/main/postgresql.conf
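Once the server is accepting connections again, the pg_stat_archiver statistics view shows whether archiving is succeeding; a growing failed_count points at a broken archive_command:
psql -U postgres -c "SELECT archived_count, last_archived_wal, failed_count, last_failed_wal, last_failed_time FROM pg_stat_archiver;"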
If archival is enabled, verify the archive command works:
# Example: if archiving to another directory
mkdir -p /archive_location
sudo chown postgres:postgres /archive_location
sudo chmod 700 /archive_location
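A quick sanity check is to confirm the postgres OS user can actually write to the destination (the test file name below is just a throwaway example):
sudo -u postgres touch /archive_location/archive_test && sudo -u postgres rm /archive_location/archive_test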
Update postgresql.conf if the archive destination doesn't exist:
sudo nano /etc/postgresql/15/main/postgresql.conf
# Find these lines and verify they're correct:
archive_mode = on
archive_command = 'test ! -f /archive_location/%f && cp %p /archive_location/%f'
After changes, reload the configuration:
sudo systemctl reload postgresql
Set up monitoring to catch disk space issues before they cause problems:
# Create a cron job to monitor disk space
sudo crontab -e
# Add this line to check hourly:
0 * * * * df -h | grep -E '9[0-9]%|100%' && df -h | mail -s "PostgreSQL disk alert" admin@example.com
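For a more targeted check, a small script can watch only the filesystem that holds the PostgreSQL data directory; the script name, threshold, path, and recipient below are illustrative assumptions:
#!/bin/bash
# check_pg_disk.sh - alert when the PostgreSQL data filesystem passes a threshold
# Requires GNU df (for --output) and a working mail command
THRESHOLD=90
USAGE=$(df --output=pcent /var/lib/postgresql | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "PostgreSQL data filesystem is ${USAGE}% full" | mail -s "PostgreSQL disk alert" admin@example.com
fi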
Adjust work_mem if you have large queries:
sudo nano /etc/postgresql/15/main/postgresql.conf
# Increase this value to reduce temporary file creation
# (but ensure sufficient RAM for your workload)
work_mem = 256MB                # Default is 4MB
Consider configuring temp_file_limit to prevent runaway temporary files:
temp_file_limit = 5GB           # Limit temporary file space per process
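To see how much temporary file space each database has actually been using, the pg_stat_database view tracks cumulative counters (run this once the server is up again):
psql -U postgres -c "SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_space FROM pg_stat_database ORDER BY temp_bytes DESC;"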
Reload to apply changes:
sudo systemctl reload postgresql
WAL Maintenance: Write-Ahead Logs are essential for crash recovery and replication. PostgreSQL automatically removes old WAL files when they're no longer needed. If archive_mode is on, files won't be removed until the archive command succeeds. Check for failed archive commands in the PostgreSQL logs.
Recovery Process: After freeing space and restarting, PostgreSQL performs automatic crash recovery by replaying the WAL. This can take significant time for large databases and is normal behavior. Run pg_controldata to verify the cluster state.
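pg_controldata reports a "Database cluster state" line, which reads "in production" once recovery has finished; on Debian/Ubuntu the binary lives under the versioned directory assumed here:
sudo -u postgres /usr/lib/postgresql/15/bin/pg_controldata /var/lib/postgresql/15/main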
Permanent Solutions:
- Increase storage capacity (add disk or resize partition)
- Implement log rotation for PostgreSQL logs
- Regularly VACUUM FULL tables to reclaim space
- Monitor table bloat and regularly clean up old data
- Consider moving WAL to a dedicated, larger partition for high-volume databases
Data Safety: Never manually delete pg_wal files or database files even when the disk is full. If the database won't start, try moving the entire pg_wal directory to another partition with space, then create a symlink.
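A sketch of that relocation, assuming /mnt/bigdisk is a partition with free space and the Debian/Ubuntu version 15 layout (adjust paths to your system):
# Stop the server before touching pg_wal
sudo systemctl stop postgresql
sudo mv /var/lib/postgresql/15/main/pg_wal /mnt/bigdisk/pg_wal
sudo ln -s /mnt/bigdisk/pg_wal /var/lib/postgresql/15/main/pg_wal
# Make sure both the symlink and the moved directory belong to postgres
sudo chown -h postgres:postgres /var/lib/postgresql/15/main/pg_wal
sudo chown -R postgres:postgres /mnt/bigdisk/pg_wal
sudo systemctl start postgresql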