This error occurs during database backup when PostgreSQL terminates the pg_dump connection prematurely. Common causes include idle transaction timeouts, network interruptions, statement timeouts, and firewall issues. Check server logs and adjust timeout parameters to resolve.
The pg_dump utility opens a connection to PostgreSQL and executes queries to read the database schema and data. The 'server closed the connection' error means PostgreSQL abruptly terminated that connection before pg_dump finished reading the backup data. This can happen mid-dump or at specific points depending on the underlying cause. The error is usually preceded by server-side events like timeout expiration, insufficient resources, or explicit connection termination. Unlike permanent schema/data corruption, this is a connectivity or resource issue that often resolves when the root cause is addressed.
Examine the PostgreSQL server logs to understand why the connection was closed. The server logs are typically located at:
# Linux/macOS
sudo tail -f /var/log/postgresql/postgresql.log
# Or query from psql (if accessible)
SELECT pg_current_logfile();
SELECT * FROM pg_read_file('/path/to/log/file');

Look for messages like:
- "terminating connection due to idle-in-transaction timeout"
- "canceling statement due to statement timeout"
- "could not send data to client: Connection reset by peer"
- "could not send data to client: Connection timed out"
- "terminating connection due to administrator command"
These messages indicate which limit was exceeded.
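If the log file is large, filtering for these messages narrows things down quickly; the path below is the same placeholder used above and may differ on your distribution:
# Pull the most recent timeout/termination messages out of the server log
sudo grep -E "terminating connection|canceling statement|could not send data" /var/log/postgresql/postgresql.log | tail -n 20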
If server logs show 'idle-in-transaction timeout', the idle_in_transaction_session_timeout parameter is terminating idle backup sessions. This commonly happens with parallel pg_dump (-j flag) when worker sessions wait idle while another worker is processing a large table.
Disable the timeout for the backup user:
-- Option 1: Disable for specific user (most secure)
ALTER USER backup_user SET idle_in_transaction_session_timeout = '0';
-- Option 2: Disable at database level
ALTER DATABASE your_database SET idle_in_transaction_session_timeout = '0';
-- Verify the setting
SHOW idle_in_transaction_session_timeout;

Or set it temporarily via environment variable when running pg_dump:
PGOPTIONS="-c idle_in_transaction_session_timeout=0" pg_dump -h localhost -U backup_user -d your_database > backup.sqlIf the server logs show 'canceling statement due to statement timeout', adjust the statement_timeout parameter:
-- Option 1: Disable statement timeout for the backup user
ALTER USER backup_user SET statement_timeout = '0';
-- Option 2: Disable at database level
ALTER DATABASE your_database SET statement_timeout = '0';
-- Option 3: Increase to a large value (e.g., 1 hour)
ALTER USER backup_user SET statement_timeout = '3600000';

Or set it via environment variable:
PGOPTIONS="-c statement_timeout=0" pg_dump -h localhost -U backup_user -d your_database > backup.sqlNote: statement_timeout = '0' disables the timeout entirely, suitable for administrative tasks like backups.
The --lock-wait-timeout option tells pg_dump to fail fast if it cannot acquire table locks, rather than waiting indefinitely:
pg_dump -h localhost -U backup_user -d your_database --lock-wait-timeout=10s > backup.sql

This is helpful if pg_dump is being blocked by long-running transactions. Acceptable formats for the timeout are:
- "10s" (seconds)
- "1m" (minute)
- "2min" (minutes)
If pg_dump still fails due to locks, you may need to identify and terminate blocking queries:
SELECT pid, usename, query, state FROM pg_stat_activity WHERE state != 'idle';
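-- Optional sketch: pg_blocking_pids() (PostgreSQL 9.6+) shows which sessions hold
-- the locks a given backend is waiting on, which helps pick the right PID to terminate
SELECT pid, usename, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;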
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE query ILIKE '%some_long_query%';

Parallel pg_dump (using the -j flag) opens multiple worker connections that may sit idle while waiting for table locks. If you're seeing timeout errors with parallel dumps, try sequential mode:
# Bad: Parallel dump (more prone to timeouts; note that -j requires the directory format)
pg_dump -h localhost -U backup_user -d your_database -j 4 -F d -f ./backup_dir
# Good: Sequential dump (less prone to timeouts)
pg_dump -h localhost -U backup_user -d your_database > backup.sql

Sequential dumps are slower but more reliable on systems with aggressive timeout settings. For very large databases, consider the following (a combined sketch follows the list):
- Using pg_dump --format=custom, which compresses by default and produces much smaller dumps than plain SQL
- Dumping individual schemas or tables instead of the entire database
- Scheduling dumps during low-activity periods to reduce lock contention
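A minimal sketch combining these ideas; the schema name reporting and the 03:00 cron window are placeholders, not values from your system:
# Dump only the 'reporting' schema in custom format (compressed by default)
pg_dump -h localhost -U backup_user -d your_database -n reporting -F c -f reporting.custom
# Example cron entry: run the nightly dump at 03:00, outside business hours
0 3 * * * pg_dump -h localhost -U backup_user -d your_database -F c -f /backups/nightly.custom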
If server logs show 'could not send data to client: Connection reset by peer' or 'Connection timed out', the issue is network-related:
# Test TCP connectivity to PostgreSQL
telnet localhost 5432
# Or using nc (netcat)
nc -zv localhost 5432
# If dumping from remote machine, verify hostname/port
ping <postgres_host>
telnet <postgres_host> 5432

Network issues to investigate:
- Firewall blocks idle connections after timeout period (common in cloud environments)
- Network switch or gateway has aggressive connection timeout
- NAT router resets long-lived connections
- ISP or corporate proxy drops idle TCP streams
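On Linux, the kernel's default keepalive idle time (7200 seconds) is usually far longer than a firewall's idle cutoff, so a probe is never sent before the connection is dropped. Checking the current values is quick (Linux-specific sysctls):
# Kernel TCP keepalive defaults: idle seconds, probe interval, probe count
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes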
To keep intermediate devices from silently dropping the connection, enable TCP keep-alives on the pg_dump connection by passing libpq keepalive parameters in the connection string:
pg_dump "host=<postgres_host> dbname=your_database user=backup_user keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=5" > backup.sql

Resource exhaustion (memory, file descriptors, connections) can force connection termination. Check server health:
# Check available memory and file descriptors
free -h
ulimit -n
# On Linux, check PostgreSQL process resource limits
cat /proc/$(pgrep -o postgres)/limits   # -o picks the oldest postgres process (the postmaster)
# Check current connection count
psql -c "SELECT count(*) as connections FROM pg_stat_activity;"
psql -c "SHOW max_connections;"If nearing connection limits:
-- Increase max_connections (requires PostgreSQL restart)
ALTER SYSTEM SET max_connections = 500;
-- Then restart PostgreSQL
-- Or use a connection pooler like PgBouncer to reduce backend connections

For memory issues, check if PostgreSQL is swapping:
ps aux | grep postgres
free -h
swapon --show

Use -v (verbose) or -vv (extra verbose) to see exactly which table is being dumped when the error occurs:
pg_dump -h localhost -U backup_user -d your_database -vv > backup.sql 2>dump.log
cat dump.log

Verbose output shows:
- "dumping table schema for TABLE_NAME"
- "dumping contents of table TABLE_NAME"
- Exact point where connection was lost
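To spot where the dump stopped, the tail of the verbose log is usually enough; this assumes the output was captured to dump.log as in the command above:
# The last "dumping contents of table ..." line is the table in progress when the connection dropped
grep "dumping contents of table" dump.log | tail -n 1
tail -n 5 dump.log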
If a specific table consistently causes the error, that table may be:
- Very large (exceeds statement_timeout)
- Locked by another process
- Corrupted (uncommon but possible)
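Two quick checks help narrow down which of these applies; problematic_table is the same placeholder name used in the commands below:
-- How big is the table, including indexes and TOAST?
SELECT pg_size_pretty(pg_total_relation_size('problematic_table'));
-- Is anything currently holding or waiting on locks against it?
SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'problematic_table'::regclass;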
You can dump that problematic table separately or exclude it:
# Dump all except problematic_table
pg_dump -h localhost -U backup_user -d your_database -T problematic_table > backup.sql
# Dump only specific table
pg_dump -h localhost -U backup_user -d your_database -t specific_table > table.sql

The plain SQL text format (-F p, the default) streams output as raw SQL text. If network issues occur, try the custom binary format, which is more resilient:
# Plain text format (default, most prone to connection issues)
pg_dump -h localhost -U backup_user -d your_database -F p > backup.sql
# Custom format (binary, more robust, smaller file size)
pg_dump -h localhost -U backup_user -d your_database -F c > backup.custom
# Directory format (allows parallel restore)
pg_dump -h localhost -U backup_user -d your_database -F d -f ./backup_dir

Custom format benefits:
- More efficient binary encoding reduces data transfer
- Supports compression with -Z flag
- More resistant to network timeouts
- Restores individual objects selectively
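The last point is worth illustrating: pg_restore can list the archive's contents and restore a single object; the table name here is hypothetical:
# List the archive contents
pg_restore -l backup.custom
# Restore just one table into the target database
pg_restore -d target_database -t specific_table backup.custom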
Restore custom format dumps:
pg_restore -d target_database backup.custom

Parallel pg_dump and Idle Transactions: Parallel pg_dump uses multiple worker connections coordinated by a supervisor process. While one worker dumps a large table, others remain idle in transaction waiting for that table lock. If idle_in_transaction_session_timeout is set (common for production safety), these idle workers will be terminated, breaking the backup. Always set idle_in_transaction_session_timeout to 0 for backup users.

SSL Renegotiation Issues: Older PostgreSQL versions with certain SSL libraries may renegotiate SSL keys during long operations. If logs show SSL errors, try setting ssl_renegotiation_limit = 0 in postgresql.conf (the parameter was removed in PostgreSQL 9.5, so this only applies to older servers).

Custom Binary Format Advantages: The custom format (-F c) uses a binary archive that is more space-efficient and allows pg_restore to selectively restore specific tables or schemas. It also supports compression (-Z) and parallel restores (-j). For large databases, prefer the custom format.

Connection Poolers: If using PgBouncer or Pgpool, verify the pooler's own settings don't impose aggressive timeouts. Poolers may reset idle connections differently than direct connections do. Consider routing backups directly to PostgreSQL, bypassing the pooler.

Remote Backups: Network timeouts are more common when backing up across the internet. Running pg_dump on the database host and shipping the output over SSH keeps the database connection local: pg_dump -h localhost -U backup_user -d your_database | ssh remote_server 'cat > backup.sql'. An SSH tunnel is an alternative when the dump must run from the remote machine (see the sketch below).

Monitoring for Patterns: Log failed backups with timestamps and compare against pg_stat_statements or slow query logs to identify correlations. Some failures may correlate with high database load, specific tables, or time of day.
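A minimal sketch of the tunnel approach, assuming SSH access to the database host and a free local port 5433 (both are assumptions):
# Forward local port 5433 to PostgreSQL on the database host over SSH
ssh -N -L 5433:localhost:5432 user@postgres_host &
# Point pg_dump at the tunnel endpoint
pg_dump -h localhost -p 5433 -U backup_user -d your_database -F c -f backup.custom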