PostgreSQL enters read-only mode when transaction ID (XID) wraparound is imminent, as a last-resort defense against data corruption. This happens when autovacuum falls behind on freezing old transaction IDs; you must run VACUUM immediately to restore write access.
PostgreSQL uses 32-bit transaction IDs (XIDs), so the counter holds about 4.29 billion values. Because XID comparisons are circular, any transaction can only "see" roughly 2 billion XIDs into the past; once the counter wraps, very old tuples would suddenly appear to have "future" XIDs and become invisible to all transactions. To prevent this silent data corruption, PostgreSQL has a safety mechanism: when the oldest unfrozen XID in a database approaches 2 billion transactions of age (the critical threshold), the server stops accepting write commands (INSERT, UPDATE, DELETE, CREATE) and only allows reads and VACUUM. The error message "database is not accepting commands to avoid wraparound data loss" is this protective mechanism in action. The "to avoid wraparound" part refers to transaction ID wraparound: the point at which the XID counter loops back to 3 (the first normal XID; values 0-2 are reserved) instead of continuing past 4.29 billion.
Run this query to see how close you are to the wraparound limit (2 billion):
SELECT datname,
age(datfrozenxid) AS frozen_xid_age,
CASE
WHEN age(datfrozenxid) > 1900000000 THEN 'CRITICAL'
WHEN age(datfrozenxid) > 1500000000 THEN 'HIGH'
WHEN age(datfrozenxid) > 200000000 THEN 'MEDIUM'
ELSE 'LOW'
END AS risk_level
FROM pg_database
WHERE datname = current_database();

If the frozen_xid_age is above 1.8 billion, you are in the danger zone. The database rejects writes shortly before the age reaches about 2.1 billion (2^31, minus a small safety margin).
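The database-level age is driven by whichever tables are furthest behind. To see which tables are holding the frozen horizon back, you can also check relfrozenxid per table (standard catalog columns, nothing assumed):

```sql
-- Ten tables with the oldest unfrozen XIDs; these are the ones to vacuum first
SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind IN ('r', 't', 'm')  -- ordinary, TOAST, and materialized
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;
```

datfrozenxid only advances once every table in the database has moved past the old horizon, so a single neglected table can keep the whole database in the danger zone.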
Long transactions block autovacuum from advancing the freeze boundary. List active transactions:
SELECT pid, usename, application_name, state, xact_start,
EXTRACT(epoch FROM (now() - xact_start)) AS seconds_in_transaction
FROM pg_stat_activity
WHERE state != 'idle'
AND xact_start IS NOT NULL
ORDER BY xact_start ASC;

What pins the freeze horizon is the age of the transaction (xact_start), not the current query, so sessions that are "idle in transaction" matter just as much. If you see very old transactions (hours or days old), close them:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid() -- Don't kill your own session
AND xact_start < now() - interval '1 hour'; -- Transactions older than 1 hour

Alternatively, restart the application that is holding the connection.
This is the critical step. Run a plain VACUUM (not VACUUM FULL) on the entire database. On PostgreSQL 12 and later, INDEX_CLEANUP OFF and TRUNCATE OFF skip work that is not needed for freezing and speed it up considerably:
VACUUM (INDEX_CLEANUP OFF, TRUNCATE OFF);

For PostgreSQL 11 and earlier:
VACUUM;

This process will:
- Freeze old transaction IDs so the affected tuples remain visible to all future transactions
- Mark dead row versions as reusable space (with INDEX_CLEANUP OFF, index entries are left for a later pass)
- Update the visibility map and free space map

Note that plain VACUUM does not refresh planner statistics; run ANALYZE separately once the emergency is over.
WARNING: Do NOT use VACUUM FULL in wraparound mode; it rewrites every table (consuming an XID in the process) and takes far longer. Also avoid VACUUM FREEZE, which freezes every tuple regardless of age and does more work than necessary.
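If a database-wide VACUUM would take too long, you can often restore write access faster by vacuuming the tables with the oldest relfrozenxid individually, since datfrozenxid advances as soon as every table is past the threshold. A sketch, where the table name is a placeholder for one of your worst offenders:

```sql
-- Vacuum the worst offenders one by one, oldest relfrozenxid first
VACUUM (INDEX_CLEANUP OFF, TRUNCATE OFF, VERBOSE) some_old_table;
```

Working through the handful of oldest tables is frequently enough to pull the database age back out of the danger zone.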
Monitor progress in another session:
SELECT query, query_start, EXTRACT(epoch FROM (now() - query_start)) AS seconds_running
FROM pg_stat_activity
WHERE query LIKE 'VACUUM%';

On large databases, this can take minutes to hours. Do not interrupt it.
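On PostgreSQL 9.6 and later, the pg_stat_progress_vacuum system view gives phase-level detail on each running vacuum:

```sql
-- One row per backend currently running VACUUM
SELECT pid, relid::regclass AS table_name, phase,
       heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_vacuum;
```

Comparing heap_blks_scanned to heap_blks_total gives a rough percentage of the current table completed.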
Once VACUUM completes, verify that the XID age has been reset:
SELECT datname, age(datfrozenxid) AS frozen_xid_age
FROM pg_database
WHERE datname = current_database();

The frozen_xid_age should now be far below the danger zone; with default settings it typically lands near vacuum_freeze_min_age (50 million) rather than in the billions. Once the age drops below the stop threshold, the database accepts write commands again automatically.
Ensure autovacuum is enabled:
SHOW autovacuum; -- Should return 'on'

If autovacuum is off, turn it back on:
ALTER SYSTEM SET autovacuum = on;
SELECT pg_reload_conf();

Then review autovacuum settings to prevent recurrence:
SHOW autovacuum_freeze_max_age; -- Default 200 million (reasonable for most)
SHOW autovacuum_vacuum_threshold; -- Minimum row changes before autovacuum triggers (default 50)
SHOW autovacuum_naptime; -- How often autovacuum wakes up (default 1min)

If you have heavy write workloads, consider lowering autovacuum_freeze_max_age to 100-150 million transactions so anti-wraparound vacuums start earlier.
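For example (the 150-million figure is an illustrative choice, not a universal recommendation, and the table name is a placeholder):

```sql
-- Cluster-wide: autovacuum_freeze_max_age is a postmaster-level
-- parameter, so this requires a server restart to take effect
ALTER SYSTEM SET autovacuum_freeze_max_age = 150000000;

-- Per table: takes effect without a restart, useful for hot tables
ALTER TABLE busy_table SET (autovacuum_freeze_max_age = 150000000);
```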
Check autovacuum logs to understand what happened:
SELECT schemaname, relname, n_live_tup, n_dead_tup,
ROUND(n_dead_tup * 100.0 / NULLIF(n_live_tup, 0), 1) AS dead_percent, -- NULLIF avoids division by zero on empty tables
last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

Tables with high dead tuple ratios mean autovacuum is not keeping up. Common causes:
- Autovacuum workers are being cancelled by long queries
- autovacuum_max_workers is too low for your workload
- autovacuum_naptime is too high
- Query patterns produce many short-lived rows that accumulate
Adjust autovacuum settings in postgresql.conf or via ALTER SYSTEM:
ALTER SYSTEM SET autovacuum_max_workers = 4; -- Increase from default 3 (requires a restart)
ALTER SYSTEM SET autovacuum_naptime = '5s'; -- Check every 5 seconds instead of the default 1min
SELECT pg_reload_conf();

Monitor pg_stat_activity for "(to prevent wraparound)" autovacuum jobs to ensure they complete without interruption.
Transaction ID wraparound is a fundamental limitation of PostgreSQL's 32-bit XID design. Newer versions add safeguards: PostgreSQL 14 introduced a vacuum failsafe (vacuum_failsafe_age, default 1.6 billion) that bypasses cost-based delays and skips index cleanup when wraparound gets close. Still, the best prevention is proper autovacuum tuning and monitoring. Track age(datfrozenxid) in your monitoring tools and alert at around 500 million to give yourself time to act before the database goes read-only.

For managed databases (AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL), check your provider's documentation for wraparound procedures; single-user mode is typically unavailable there, so early alerting matters even more. In multi-tenant environments, consider per-schema or per-table vacuuming strategies so one bloated table does not block cleanup across the entire instance. If wraparound pressure recurs frequently, investigate whether table bloat is accumulating (dead tuples piling up faster than autovacuum can clean), and consider lowering autovacuum_vacuum_scale_factor or adding more autovacuum workers.
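Per-table tuning for a write-heavy table can be sketched like this (table name and values are illustrative, not recommendations):

```sql
-- Vacuum this hot table when ~2% of its rows are dead
-- instead of the default 20% (autovacuum_vacuum_scale_factor = 0.2)
ALTER TABLE hot_table SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_vacuum_threshold = 1000
);
```

Table-level storage parameters take effect without a restart and override the cluster-wide defaults only for that table.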