PostgreSQL detects corrupted data pages when checksums are enabled. Recover by failing over to a replica, replaying WAL from backups, or, as a last resort, zeroing damaged pages with the appropriate settings.
This critical error occurs when PostgreSQL's data checksum verification detects that one or more data pages have been corrupted on disk. It indicates a serious problem with your database storage layer, potentially caused by hardware failure, bit rot, power loss during writes, or faulty storage system behavior. The error only appears when data checksums were enabled during cluster initialization with the --data-checksums flag, which is now the default on most cloud providers and Linux distributions. Without checksums enabled, corruption may silently return bad data without ever alerting you to the problem.
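For context, checksums are normally chosen when the cluster is first created; a minimal sketch, assuming a typical Debian-style data directory path:

# Initialize a new cluster with page checksums enabled
initdb --data-checksums -D /var/lib/postgresql/12/main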
Connect to the database and check if checksums are enabled:
SHOW data_checksums;

If the result is off, your cluster was not initialized with checksums and you may already have undetected corruption. If it is on, corruption detection is enabled and you should investigate further.
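If checksums are off, PostgreSQL 12 and later can enable them on an existing cluster with the pg_checksums tool; a sketch, assuming the same data directory path (the server must be stopped, and enabling rewrites every data file, so expect downtime proportional to cluster size):

sudo systemctl stop postgresql
# Rewrite all data files with checksums enabled
sudo -u postgres pg_checksums --enable -D /var/lib/postgresql/12/main
sudo systemctl start postgresql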
If you have a PostgreSQL replica/follower, the corrupted page may not exist there (unless the replica was created from a bad base backup). Verify the replica is healthy:
-- On the replica, check replication lag
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
-- Try the failing query on the replica
SELECT ... FROM <corrupted_table> LIMIT 1;

If the replica is healthy and the query succeeds, fail over to the replica immediately.
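Promotion can be triggered from SQL on PostgreSQL 12+ or with pg_ctl on older versions; a minimal sketch (paths are assumptions, and if you run dedicated HA tooling, use its failover procedure instead):

# On the replica: promote it to primary (PostgreSQL 12+)
sudo -u postgres psql -c "SELECT pg_promote();"
# Or with pg_ctl, which also works on older versions:
sudo -u postgres pg_ctl promote -D /var/lib/postgresql/12/main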
Make a file-system-level copy of your data directory before attempting any repairs. This is crucial:
sudo systemctl stop postgresql
# Or for older systems:
sudo service postgresql stop
# Copy the entire data directory (-a preserves ownership and permissions)
sudo cp -a /var/lib/postgresql/12/main /var/lib/postgresql/12/main.backup

Do not use pg_dump for this; file-system copies preserve the ability to replay WAL.
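While the server is stopped, you can also enumerate checksum failures offline with the pg_checksums tool (PostgreSQL 12+); a sketch, assuming the same data directory:

# Scan the cluster and report every block with a checksum mismatch
sudo -u postgres pg_checksums --check -D /var/lib/postgresql/12/main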
If you maintain base backups and WAL archives:
# Move the damaged data directory aside, then restore the base backup
sudo mv /var/lib/postgresql/12/main /var/lib/postgresql/12/main.damaged
sudo cp -a /path/to/base/backup /var/lib/postgresql/12/main
sudo cp /path/to/wal/archives/* /var/lib/postgresql/12/main/pg_wal/
# Start the server
sudo systemctl start postgresql

Try the failing query. If it succeeds, you've recovered from backup. If not, try an older base backup.
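On PostgreSQL 12+, a more controlled alternative is archive recovery, where the server fetches WAL itself and stops replay at a chosen point; a sketch, with the archive path and target time as assumptions:

# In postgresql.conf of the restored data directory:
restore_command = 'cp /path/to/wal/archives/%f "%p"'
# Optional: stop replay just before the corruption occurred
recovery_target_time = '2024-01-01 12:00:00'

# Request archive recovery, then start the server
sudo -u postgres touch /var/lib/postgresql/12/main/recovery.signal
sudo systemctl start postgresql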
If recovery from backups is not an option and the affected rows either exist elsewhere or can be sacrificed, configure PostgreSQL to skip damaged pages:
# Edit postgresql.conf
sudo nano /etc/postgresql/12/main/postgresql.conf
# Add or modify these settings:
zero_damaged_pages = on
ignore_checksum_failure = on

Start the server and run VACUUM to identify damaged pages:
sudo systemctl start postgresql

Then connect with psql and run:

-- Rewrite the table, zeroing damaged pages as they are read
VACUUM FULL VERBOSE ANALYZE <corrupted_table>;
-- Rebuild all indexes in the database
REINDEX DATABASE <database_name>;

Note: this approach may return incorrect data from damaged pages before they are zeroed, and any rows on those pages are permanently lost. Turn both settings back off once the cleanup is complete.
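If you want to salvage what remains before rewriting anything in place, you can first copy the surviving rows into a fresh table; a sketch, where salvaged_copy is a hypothetical table name:

-- With zero_damaged_pages on, this scan skips past damaged pages
-- and captures every row that is still readable
CREATE TABLE salvaged_copy AS SELECT * FROM <corrupted_table>;
-- Compare the count against expectations to gauge how much was lost
SELECT count(*) FROM salvaged_copy;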
After recovery, check for additional corruption using pg_amcheck (PostgreSQL 14+):
# Create the amcheck extension if not present
sudo -u postgres psql -d <database> -c "CREATE EXTENSION IF NOT EXISTS amcheck;"
# Run corruption checks in parallel
pg_amcheck -d <database> --heapallindexed -j 4
Or check a specific table from SQL:

SELECT * FROM verify_heapam('<table_name>'::regclass);

Any errors from amcheck indicate remaining corruption that needs investigation.
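amcheck can also verify B-tree indexes against the heap; a sketch, with <index_name> as a placeholder:

-- Verify index structure and confirm every heap row appears in the index
SELECT bt_index_check(index => '<index_name>'::regclass,
                      heapallindexed => true);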
Data corruption is often hardware- or configuration-specific. If corruption persists after recovery attempts, contact PostgreSQL professional support. Some advanced recovery options:
1. pg_hexedit can inspect low-level page structure for analysis, but only on copies.
2. The pg_visibility extension verifies visibility map integrity (PostgreSQL 9.6+).
3. The pg_checksums tool (PostgreSQL 12+) can detect block-level corruption without running the server.
4. Streaming replication with WAL archiving provides the best recovery path: corrupted data pages are rewritten correctly in WAL, so replaying WAL from an older base backup often recovers good data.
5. For locale-related corruption on replica failover, ensure the source and target systems run compatible glibc versions.
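For option 2, a minimal sketch of checking a table's visibility map from SQL, with <table_name> as a placeholder:

CREATE EXTENSION IF NOT EXISTS pg_visibility;
-- Report tuple IDs that contradict the all-visible bit
SELECT * FROM pg_check_visible('<table_name>'::regclass);
-- Report tuple IDs that contradict the all-frozen bit
SELECT * FROM pg_check_frozen('<table_name>'::regclass);

Any tuple IDs returned point to visibility map inconsistencies worth investigating.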