PostgreSQL crash shutdown (57P02) occurs when the server crashes and terminates all client connections. The server then restarts automatically and replays the write-ahead log (WAL) to restore data integrity.
A crash shutdown error occurs when the PostgreSQL server has terminated unexpectedly due to a backend process crash, an out-of-memory condition, a segmentation fault, or a disk I/O failure. When this happens, the postmaster (PostgreSQL's supervisor process) detects the crash and immediately terminates all existing client connections to prevent data corruption. Each connected client receives the message "terminating connection because of crash of another server process" with SQLSTATE 57P02. This is a protective mechanism: on the next startup, PostgreSQL automatically initiates crash recovery by replaying the write-ahead log (WAL) to restore the database to a consistent state.
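A client that is connected at the moment of the crash will typically see output like this (wording may vary slightly between versions):
FATAL:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.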
Examine the PostgreSQL server logs to identify what caused the crash. Look for segmentation faults, out-of-memory messages, or disk errors.
# On Linux, typically:
tail -f /var/log/postgresql/postgresql.log
# Or in the data directory:
tail -f /var/lib/postgresql/[VERSION]/main/log/postgresql.log
Look for messages like:
- "Segmentation fault"
- "Out of memory"
- "Connection lost unexpectedly"
- "Invalid page header"
Insufficient disk space is a common cause. Verify there is enough space for recovery and WAL replay.
df -h
# Check the partition where PostgreSQL data directory is located
# Check the size of the WAL files:
du -sh /var/lib/postgresql/[VERSION]/main/pg_wal/
Ensure at least 20-30% of disk space is free before restarting.
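As a rough guard, a small script can warn when free space drops below that threshold (the 80% cutoff and data directory path are assumptions; this relies on GNU df):
# Warn if the filesystem holding the data directory is more than 80% full:
PGDATA=/var/lib/postgresql/[VERSION]/main
USED=$(df --output=pcent "$PGDATA" | tail -n 1 | tr -dc '0-9')
if [ "$USED" -gt 80 ]; then echo "WARNING: only $((100 - USED))% free on $PGDATA"; fi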
By default, PostgreSQL is configured with restart_after_crash = on, which means it will automatically restart and perform crash recovery.
# Start/restart PostgreSQL (it will auto-recover):
sudo systemctl restart postgresql
# Or using pg_ctl:
sudo -u postgres pg_ctl restart -D /var/lib/postgresql/[VERSION]/main
Watch the logs as it replays the WAL:
tail -f /var/log/postgresql/postgresql.log | grep -i recovery
Wait for the message: "database system is ready to accept connections"
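Rather than watching the log by hand, you can poll with pg_isready, which ships with PostgreSQL and exits 0 once the server accepts connections (default host and port are assumed here):
# Block until the server is accepting connections again:
until pg_isready -q; do
    sleep 2
done
echo "PostgreSQL is accepting connections"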
Once recovery completes and the server is running, verify the integrity of critical tables.
sudo -u postgres psql
-- Rebuild indexes that may have been damaged (REINDEX rebuilds rather than
-- checks; run it in each database you care about):
REINDEX DATABASE postgres;
-- Force a full read of a critical table; corruption surfaces as read errors.
-- (PostgreSQL has no CHECK TABLE statement, so a sequential scan is the
-- simplest integrity test.)
SELECT count(*) FROM your_table_name;
-- Confirm the server has left recovery (returns false once crash recovery is done):
SELECT pg_is_in_recovery();
If any tables show corruption, restore from your backup.
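For deeper verification, the amcheck contrib extension (if installed) can validate B-tree index structure. A minimal sketch that checks every B-tree index in the public schema:
-- Requires the amcheck contrib extension:
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT c.relname AS index_name, bt_index_check(c.oid)
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_am am ON am.oid = c.relam
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE am.amname = 'btree' AND n.nspname = 'public';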
Once the server is stable, identify the root cause to prevent future crashes.
For OOM crashes:
# Check memory usage and set appropriate limits:
free -h
# Adjust PostgreSQL shared_buffers if necessary (in postgresql.conf); the usual
# guideline is about 25% of RAM, but lower it if the server is memory-constrained:
shared_buffers = 256MB # conservative value for a 4GB RAM server
# Restart PostgreSQL for changes to take effect:
sudo systemctl restart postgresql
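To confirm the kernel's OOM killer was involved, and to harden a dedicated database host against it (vm.overcommit_memory = 2 is the setting the PostgreSQL documentation recommends for dedicated servers):
# Did the OOM killer terminate a postgres process?
dmesg | grep -iE 'out of memory|oom-killer' | grep -i postgres
# On a dedicated database host, disable memory overcommit:
sudo sysctl -w vm.overcommit_memory=2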
For segmentation faults:
- Check the PostgreSQL version: SELECT version();
- Update to the latest minor release of your major version; crash fixes are back-patched there
- Report the crash dump to PostgreSQL developers if it's reproducible
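A quick shell sketch of the version check, plus a core-dump check useful when reporting crashes (connection options omitted):
# Confirm the running server version from the shell:
psql -At -c 'SELECT version();'
# Make sure core dumps are permitted so a reproducible crash leaves evidence:
ulimit -c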
For disk I/O failures:
- Run disk diagnostics: sudo smartctl -a /dev/sda
- Check filesystem integrity: sudo fsck /mount/point (unmount first)
- Replace faulty hardware if needed
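If data checksums were enabled when the cluster was initialized, the pg_checksums tool (PostgreSQL 12+; it may live in the version-specific bin directory) can verify every data page offline. The server must be cleanly shut down first:
# Verify page checksums across the whole cluster (server must be stopped):
sudo systemctl stop postgresql
sudo -u postgres pg_checksums --check -D /var/lib/postgresql/[VERSION]/main
sudo systemctl start postgresql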
By default, PostgreSQL is configured well for crash recovery. However, you can verify these settings:
sudo -u postgres psql
-- View recovery settings:
SHOW restart_after_crash;
SHOW wal_level;
SHOW max_wal_senders; -- For replication
Recommended settings in postgresql.conf:
# Ensure automatic recovery is enabled
restart_after_crash = on
# Use WAL archiving for faster recovery
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
Reload the configuration (note that wal_level and archive_mode only take effect after a full restart; restart_after_crash and archive_command can be changed with a reload):
sudo -u postgres psql -c "SELECT pg_reload_conf();"
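Once archiving is active, the built-in pg_stat_archiver view shows whether WAL segments are actually reaching the archive:
-- failed_count should stay at 0 and last_archived_wal should advance over time:
SELECT archived_count, last_archived_wal, failed_count, last_failed_wal
FROM pg_stat_archiver;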
Crash shutdown (57P02) is distinct from a graceful shutdown (57P01, admin_shutdown). The 57P02 error itself is not a problem; it is PostgreSQL's way of protecting data integrity. The real issue is whatever caused the crash in the first place.
Key recovery mechanisms:
- WAL Replay: PostgreSQL uses write-ahead logging to ensure durability. After a crash, it replays the WAL to recover committed transactions.
- Checkpoint: Periodic checkpoints create safe points in the database. Recovery only needs to replay changes after the last checkpoint.
- Data Page Verification: PostgreSQL can verify data page integrity during recovery using checksums (if enabled).
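You can inspect each of these mechanisms from psql (pg_current_wal_lsn() is the PostgreSQL 10+ name; data_checksums reads 'on' only if checksums were enabled at initdb time):
-- Page checksums, checkpoint cadence, and current WAL position:
SHOW data_checksums;
SHOW checkpoint_timeout;
SELECT pg_current_wal_lsn();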
For production systems:
- Always maintain backups and test restore procedures regularly.
- Monitor disk space and system resources closely.
- Enable wal_level = replica and archive WAL files to another location for point-in-time recovery (PITR).
- Set up monitoring/alerting for OOM conditions, disk full errors, and log patterns indicating crashes.
- Use pg_basebackup for physical backups or logical backups with pg_dump for additional redundancy.
- Consider high availability solutions like Patroni or replication to a standby server.
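As a starting point for the backup recommendation, a minimal pg_basebackup invocation (the target directory is a placeholder):
# Take a compressed tar-format physical base backup with progress reporting:
sudo -u postgres pg_basebackup -D /backup/base -Ft -z -P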