The pg_dump utility lost its database connection mid-backup, usually due to network issues, server restarts, or timeout settings. Check server status, verify network connectivity, and adjust timeout parameters.
This error occurs when pg_dump establishes a connection to the PostgreSQL server but the connection is dropped during the backup operation. Unlike connection failures at startup, this error indicates the server was initially reachable but became unavailable or unresponsive while transferring data. This typically signals transient network problems, the database server terminating abnormally, or timeout limits being exceeded during large data transfers.
Connect to the database using psql to ensure the server is responsive and not restarting:
psql -h localhost -U postgres -d postgres -c "SELECT version();"

If this fails, start PostgreSQL:
sudo systemctl start postgresql

Check the PostgreSQL logs for errors:
sudo tail -f /var/log/postgresql/postgresql.log

Run a simple connection test before attempting the full dump:
pg_isready -h localhost -p 5432 -U postgres

For remote servers, verify you can reach the host:
ping <server_host>
telnet <server_host> 5432

A too-aggressive statement_timeout can kill pg_dump mid-operation. pg_dump has no flag for it, so disable the timeout for the dump session through the PGOPTIONS environment variable:

PGOPTIONS="-c statement_timeout=0" pg_dump -h localhost -U postgres -d mydb > backup.sql

Or set it on the server (postgresql.conf):
statement_timeout = 300000 # 5 minutes in milliseconds

Reload PostgreSQL after changing the config (statement_timeout takes effect on reload; a full restart is not required):

sudo systemctl reload postgresql

If pg_dump hangs waiting for table locks, set a lock timeout:
pg_dump -h localhost -U postgres -d mydb --lock-wait-timeout=30s > backup.sql

Parallel dumps open multiple connections. If you're using -j (jobs), reduce the job count or increase the server's connection limit.
Check current max_connections:
psql -h localhost -U postgres -d postgres -c "SHOW max_connections;"

Reduce parallel jobs in pg_dump:
pg_dump -h localhost -U postgres -d mydb -F d -j 2 -f backup_dir

Note that parallel dumps require the directory output format (-F d), so the output is a directory rather than a single file. Or increase max_connections in postgresql.conf:
max_connections = 300

Restart PostgreSQL:
sudo systemctl restart postgresql

Avoid password prompts that could time out:
cat > ~/.pgpass <<EOF
localhost:5432:mydb:postgres:mypassword
EOF
chmod 600 ~/.pgpass

Then run pg_dump without password arguments:
pg_dump -h localhost -U postgres -d mydb > backup.sql

If a specific large table is causing timeouts, dump selectively:
# Dump schema only
pg_dump -h localhost -U postgres -d mydb -s > schema.sql
# Dump specific large table with verbose output
pg_dump -h localhost -U postgres -d mydb -t large_table -v > large_table.sql
# Dump all others
pg_dump -h localhost -U postgres -d mydb --exclude-table=large_table > data.sql

Version mismatches can cause unexpected disconnections. Verify that the client and server versions match:
# Check pg_dump version
pg_dump --version
# Check server version
psql -h localhost -U postgres -d postgres -c "SELECT version();"

If they differ significantly (e.g., pg_dump from PostgreSQL 12 against a version 16 server), upgrade pg_dump to match the server; an older pg_dump cannot dump a newer server.
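As a convenience, the two checks above can be wired into a small comparison helper. This is a sketch, not part of pg_dump: the function name and parsing are assumptions, and the parsing expects PostgreSQL 10+ version strings such as "pg_dump (PostgreSQL) 16.2".

```shell
# Hypothetical helper: compare the major version found in two version
# strings (client first, server second). Returns success when they match.
same_major_version() {
  client_major=$(printf '%s\n' "$1" | grep -oE '[0-9]+' | head -n 1)
  server_major=$(printf '%s\n' "$2" | grep -oE '[0-9]+' | head -n 1)
  [ "$client_major" = "$server_major" ]
}

# Usage sketch, feeding in the commands shown above:
# same_major_version "$(pg_dump --version)" \
#   "$(psql -h localhost -U postgres -At -c 'SHOW server_version;')" \
#   || echo "client/server major versions differ" >&2
```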
For network-heavy dumps over WAN connections, use the custom format, which compresses output by default and reduces bandwidth: pg_dump -h remote_host -U postgres -d mydb -F c -f backup.dump. If you are using SSL, connection renegotiation can cause drops on older servers; set sslmode=require in your connection string (or export PGSSLMODE=require). In containerized environments (Docker/Kubernetes), ensure adequate CPU and memory are allocated, since resource starvation during the copy operation commonly triggers timeouts. For very large databases, consider splitting the dump by schema or table using the -n or -t flags and combining the files afterward.
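The split-by-table idea can be sketched as a small loop, assuming the same connection settings used throughout this article. Each table goes to its own file, so a dropped connection only invalidates one file and only that table needs a rerun. DUMP_CMD is a convention invented here (it defaults to pg_dump) so the loop is easy to wrap or test; the table names in the usage line are made up.

```shell
# Default dump command; override DUMP_CMD to change connection settings.
DUMP_CMD=${DUMP_CMD:-"pg_dump -h localhost -U postgres -d mydb"}

# Dump each named table to its own <table>.sql file, continuing past
# failures and reporting which tables need a rerun.
dump_each_table() {
  status=0
  for t in "$@"; do
    if $DUMP_CMD -t "$t" > "$t.sql"; then
      echo "dumped $t" >&2
    else
      echo "dump of $t failed; rerun only this table" >&2
      status=1
    fi
  done
  return $status
}

# Usage sketch (hypothetical table names):
# pg_dump -h localhost -U postgres -d mydb -s > schema.sql
# dump_each_table orders events large_table
```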