pg_dump fails with a lock timeout when it cannot acquire AccessShareLock on tables because other transactions hold conflicting locks. This typically happens when active DDL operations or long-running transactions block the backup process.
pg_dump acquires an AccessShareLock on each table during the backup startup phase to ensure no table is dropped or rewritten while the dump is in progress. If another transaction holds a conflicting AccessExclusiveLock (from operations like ALTER TABLE, DROP TABLE, TRUNCATE, or REINDEX), or if a transaction that ran such DDL is still open and uncommitted, pg_dump blocks waiting for the lock. By default it waits indefinitely; if a timeout is in effect (the server's lock_timeout setting or pg_dump's --lock-wait-timeout option) and the wait exceeds it, pg_dump fails with this error. The error can also occur in managed services like AWS RDS when the dump user lacks proper permissions on tables owned by other users.
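You can reproduce the condition in two psql sessions (the table name t is a placeholder; this is only an illustrative sketch):

-- Session 1: take an AccessExclusiveLock and hold it by not committing
BEGIN;
ALTER TABLE t ADD COLUMN c int;
-- (no COMMIT yet)

-- Session 2: pg_dump now blocks trying to lock t. Plain reads still work,
-- but an explicit AccessShareLock request waits, just as pg_dump's does:
LOCK TABLE t IN ACCESS SHARE MODE;

Committing or rolling back session 1 immediately releases session 2.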
Connect to the database and identify what is holding locks:
SELECT pid, usename, query, state, query_start, wait_event_type
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;

This shows all active queries. Look for long-running transactions (old query_start values) or DDL operations that might be blocking the dump.
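On PostgreSQL 9.6 and later, the pg_blocking_pids() function can pinpoint the blockers directly (a sketch; run it while pg_dump is stuck):

-- Show each waiting session together with the PIDs blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

The pg_dump session will appear here with the blocking PIDs listed in blocked_by.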
To see explicit locks:
SELECT database, relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted
ORDER BY relation;

Use the --lock-wait-timeout option to prevent indefinite waiting:
pg_dump --lock-wait-timeout=60000 dbname > backup.sql

The timeout is specified in milliseconds. The example above waits up to 60 seconds before failing. This prevents pg_dump from blocking production traffic for hours while waiting for locks.
Alternatively, use a shorter timeout to fail faster:
pg_dump --lock-wait-timeout=10000 dbname > backup.sql

If you identify a blocking process, you can terminate it, but use caution in production: pg_terminate_backend kills the session and rolls back its open transaction.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid = 12345; -- Replace with the PID from step 1

Verify the blocking session is actually gone before retrying pg_dump:
pg_dump --lock-wait-timeout=60000 dbname > backup.sql

Run pg_dump when minimal database activity is occurring. Check your application logs to identify quiet periods (early mornings, weekends, etc.).
For recurring backups, add to cron:
# Run backup daily at 2 AM
0 2 * * * /usr/bin/pg_dump --lock-wait-timeout=60000 dbname > /backups/db-$(date +\%Y\%m\%d).sql

If you're dumping a database with many tables and pg_dump fails with "out of shared memory" and a hint to raise max_locks_per_transaction, increase the setting:
-- Show current value
SHOW max_locks_per_transaction;
-- Set in postgresql.conf
max_locks_per_transaction = 256
Note: max_locks_per_transaction can only be changed at server start, so editing postgresql.conf requires a restart; it cannot be set per session with SET. The default is 64, but databases with many tables may need 256 or more.
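On PostgreSQL 9.4 and later, you can make the same change from SQL with ALTER SYSTEM instead of editing postgresql.conf by hand; a restart is still required for it to take effect:

-- Writes the setting to postgresql.auto.conf (superuser only)
ALTER SYSTEM SET max_locks_per_transaction = 256;
-- Then restart the server (e.g. pg_ctl restart or your service manager)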
On AWS RDS, if the error includes 'permission denied', the dump user lacks access to tables owned by other users:
Have the table owner grant permissions:
GRANT CONNECT ON DATABASE mydb TO rds_master_user;
GRANT USAGE ON SCHEMA public TO rds_master_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO rds_master_user;
-- Make it default for future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO rds_master_user;

Then retry pg_dump as the master user.
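On PostgreSQL 14 and later, a simpler alternative to per-schema grants is the built-in pg_read_all_data role, which confers read access to all tables (rds_master_user is the placeholder user from the grants above):

-- PostgreSQL 14+: one grant instead of per-table/per-schema privileges
GRANT pg_read_all_data TO rds_master_user;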
Lock Modes: pg_dump uses AccessShareLock, the weakest table lock mode, which is compatible with SELECT, INSERT, UPDATE, DELETE, and most other operations. It conflicts only with AccessExclusiveLock, which is taken by DDL operations such as DROP TABLE, TRUNCATE, and many forms of ALTER TABLE.
Parallel Dumps: When using pg_dump -j N for parallel dumps, the leader process acquires AccessShareLock on all tables before workers begin. Each worker then requests its own lock on the table it is dumping with NOWAIT; if another session has queued an exclusive lock request on that table in the meantime, the worker's NOWAIT request fails and the dump aborts, because waiting would deadlock against the leader's lock.
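For reference, parallel mode requires the directory output format; a typical invocation (the job count, output path, and dbname are placeholders) looks like:

pg_dump -j 4 -Fd -f /backups/db.dir --lock-wait-timeout=60000 dbname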
RDS Specifics: On AWS RDS, the rds_superuser role is not a true superuser and cannot access tables owned by other users unless permissions are explicitly granted.
Snapshot Isolation: For PostgreSQL 9.4+, you can use explicit snapshot isolation:
pg_dump --snapshot='00000003-0000001B-1' dbname > backup.sql

Replace the snapshot name with the value returned by pg_export_snapshot() in another connection; that connection must keep its transaction open while the dump runs to ensure consistency.
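The exporting session might look like this (a sketch; the returned name will differ each time):

-- In a second connection; the snapshot is only valid while this
-- transaction stays open.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();  -- returns a name like 00000003-0000001B-1
-- Run pg_dump --snapshot=<name> now, then COMMIT after it finishes.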