PostgreSQL "Connection does not exist" error occurs when a client tries to use or reference a connection that has been closed, terminated, or never established. Common causes include connection timeouts, server restarts, pooling misconfiguration, and referencing non-existent connections. Fixing requires verifying connection parameters, checking server status, and properly managing connection pools.
PostgreSQL error "Connection does not exist" occurs when a client attempts to reference or use a connection that is no longer valid. This can happen in several scenarios: a previously established connection has been closed by the server or client, a connection pool has terminated idle connections, a network disruption severed the connection without proper cleanup, or a prepared statement or cursor references a connection that no longer exists.

In PostgreSQL, connections are stateful. Once a TCP connection is established between a client and the server, the server maintains state information about that connection, including active transactions, temporary tables, prepared statements, and cursors. When a connection is terminated, whether intentionally or due to a timeout, network issue, or server shutdown, all connection-specific state is lost.

This error is distinct from "could not connect to server" (which means a connection cannot be established) or "role does not exist" (which is an authentication issue). "Connection does not exist" specifically means that a connection object that once existed is no longer valid.
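The connection-scoped nature of this state can be illustrated with a minimal mock; no real server is involved, and the MockConnection class below is purely hypothetical:

```javascript
// Hypothetical mock of a stateful connection: prepared statements live
// only as long as the connection itself, mirroring PostgreSQL behavior.
class MockConnection {
  constructor() {
    this.open = true;
    this.prepared = new Map(); // connection-scoped state
  }
  prepare(name, sql) {
    if (!this.open) throw new Error('Connection does not exist');
    this.prepared.set(name, sql);
  }
  execute(name) {
    if (!this.open) throw new Error('Connection does not exist');
    if (!this.prepared.has(name)) {
      throw new Error(`prepared statement "${name}" does not exist`);
    }
    return `executing: ${this.prepared.get(name)}`;
  }
  close() {
    this.open = false;
    this.prepared.clear(); // all connection-specific state is lost
  }
}

const conn = new MockConnection();
conn.prepare('get_user', 'SELECT * FROM users WHERE id = $1');
conn.close();
// Any further use of the dead connection now fails with
// "Connection does not exist" -- just like referencing a closed
// PostgreSQL connection.
```

Real drivers behave analogously: once the underlying connection is gone, every handle derived from it (statements, cursors, transactions) is invalid.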
The error may occur if the server crashed or is restarting. Confirm the server is healthy first.
# Check if PostgreSQL is running:
sudo systemctl status postgresql
# Or on macOS:
ps aux | grep postgres
# Try a simple connection:
psql -U postgres -d postgres -c "SELECT 1;"
# If server is down, start it:
sudo systemctl start postgresql
If the server is running, attempt a fresh connection to verify basic connectivity works.
Connection pools often terminate idle connections. Verify pool settings and server timeout parameters.
# Check server-side idle timeout settings:
psql -U postgres -d postgres -c "SHOW idle_in_transaction_session_timeout;"
psql -U postgres -d postgres -c "SHOW tcp_keepalives_idle;"
# If using pgBouncer, check the configuration file (typically /etc/pgbouncer/pgbouncer.ini):
grep -E "server_idle_timeout|client_idle_timeout" /etc/pgbouncer/pgbouncer.ini
# Connection pool idle timeout is often very short (e.g., 300 seconds = 5 minutes)
# If your application holds connections longer, adjust:
# For PostgreSQL server: edit postgresql.conf
idle_in_transaction_session_timeout = 0 # 0 = disabled
# For pgBouncer: edit pgbouncer.ini
server_idle_timeout = 600 # seconds
Ensure timeout values match your application's expected connection lifetime.
Always validate connections before use and properly close them after use to prevent reusing stale connections.
// Node.js + pg client example:
const { Client } = require('pg');

function makeClient() {
  return new Client({
    host: 'localhost',
    port: 5432,
    database: 'postgres',
    user: 'postgres',
  });
}

let client = makeClient();
// Ensure connection is established before queries
await client.connect();
try {
  // Validate connection with a simple query before reuse
  const result = await client.query('SELECT 1');
  console.log(result.rows);
} catch (err) {
  console.error('Connection error:', err.message);
  // A pg Client cannot be reused after end(); create a fresh one to reconnect
  await client.end();
  client = makeClient();
  await client.connect();
} finally {
  // Always close the connection explicitly
  await client.end();
}
With connection pools, rely on the pool's built-in validation rather than manual connection management.
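One way to lean on the pool rather than manual management is a small checkout helper. A minimal sketch, assuming a node-postgres-style Pool whose connect() yields a client with query() and release() methods (withValidatedClient is a hypothetical helper, not a pg API):

```javascript
// Checks out a client from a pool, validates it with a cheap probe query,
// runs the caller's work, and always releases the client back to the pool.
// Assumes a node-postgres-style pool: pool.connect() -> client exposing
// query() and release().
async function withValidatedClient(pool, fn) {
  const client = await pool.connect();
  try {
    await client.query('SELECT 1'); // fails fast if the connection is stale
    return await fn(client);
  } finally {
    client.release(); // return to the pool even on error
  }
}
```

With the real node-postgres Pool, options such as max and idleTimeoutMillis control how many connections are kept and how long idle ones live, so the probe query is a belt-and-braces check rather than a replacement for pool configuration.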
Connection poolers like pgBouncer maintain a pool of reusable connections with health checks to prevent stale connections.
# Install pgBouncer (Debian/Ubuntu):
sudo apt-get install pgbouncer
# Edit /etc/pgbouncer/pgbouncer.ini:
[databases]
mydb = host=localhost port=5432 dbname=mydb
[pgbouncer]
listen_port = 6432
 
pool_mode = transaction # or session
max_client_conn = 1000
default_pool_size = 25
server_idle_timeout = 600
server_check_query = select 1 # Validates idle connections before reuse
# Start pgBouncer:
sudo systemctl start pgbouncer
# Verify it's running:
psql -p 6432 -U postgres -d pgbouncer -c "SHOW POOLS;"
The server_check_query ensures pgBouncer validates a pooled connection before handing it out again when it has been idle longer than server_check_delay.
Network interruptions can silently close connections without proper cleanup.
# Test TCP connectivity to PostgreSQL:
telnet localhost 5432
# Or:
nc -zv localhost 5432
# Check firewall rules (if running on remote server):
sudo iptables -L | grep 5432
# Monitor TCP connection states:
ss -tnep | grep 5432
# On macOS, check netstat:
netstat -an | grep 5432
# For slow network issues, monitor latency and packet loss:
ping <postgres-server-ip>
mtr -r -c 100 <postgres-server-ip>
Many CLOSE_WAIT states mean the application or pooler is not closing connections properly, i.e. a connection leak. Large numbers of TIME_WAIT are normal on busy servers but can indicate excessive connection churn.
Ensure your connection string is correct and parameters haven't changed unexpectedly.
# Check current PostgreSQL connection parameters:
echo $DATABASE_URL
# Test with psql directly:
psql "$DATABASE_URL" -c "SELECT 1;"
# Extract and validate each component:
# Format: postgresql://[user[:password]@][host][:port][/dbname][?options]
echo "$DATABASE_URL" | grep -oP '(?<=//)[^@]*(?=@)' # user:password
echo "$DATABASE_URL" | grep -oP '(?<=@)[^/]*' # host:port
Common issues:
- Database renamed or dropped
- User role removed or revoked privileges
- Password changed
- Connection parameters became invalid after server migration
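The same decomposition can be done programmatically. A minimal sketch using the WHATWG URL class built into Node.js (the connection string below is hypothetical):

```javascript
// Parse a PostgreSQL connection URL into its components using the
// URL class built into Node.js.
const url = new URL('postgresql://appuser:s3cret@db.example.com:5432/mydb?sslmode=require');

const params = {
  user: url.username,                       // 'appuser'
  password: url.password,                   // 's3cret'
  host: url.hostname,                       // 'db.example.com'
  port: Number(url.port) || 5432,           // fall back to the default port
  database: url.pathname.slice(1),          // strip leading '/' -> 'mydb'
  sslmode: url.searchParams.get('sslmode'), // 'require'
};
console.log(params);
```

Parsing the URL once and logging the components makes it obvious when a migration or a renamed database silently changed one field.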
PostgreSQL logs reveal why connections were closed, helping identify patterns.
# Check PostgreSQL log directory:
sudo -u postgres tail -f /var/log/postgresql/postgresql-*.log
# Enable query logging to debug connection issues:
# Edit /etc/postgresql/*/main/postgresql.conf:
log_connections = on
log_disconnections = on
log_min_duration_statement = 1000 # Log queries taking >1s
# Or via SQL:
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
SELECT pg_reload_conf();
# Now check logs for termination reasons:
sudo -u postgres grep "terminating" /var/log/postgresql/postgresql-*.log
Look for patterns like "idle timeout", "backend process crash", or "connection reset".
For high-traffic applications, connection pool sizing is critical. A pool that's too small forces requests to queue waiting for a free connection, while one that's too large wastes server memory and adds contention. A common starting point is pool_size = (2 × CPU_cores) + effective_spindle_count; many applications perform well with 20–30 concurrent connections.
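The sizing heuristic above can be sketched as a small helper; the inputs are illustrative, and on SSD-only hosts the effective spindle count is often treated as roughly 1:

```javascript
// Heuristic pool size: (2 x CPU cores) + effective spindle count.
// This is a starting point to tune from, not a hard rule.
function suggestedPoolSize(cpuCores, effectiveSpindles) {
  return 2 * cpuCores + effectiveSpindles;
}

console.log(suggestedPoolSize(8, 1)); // 8-core host with SSD storage -> 17
```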
In containerized environments (Docker, Kubernetes), connections often fail due to network policies or service discovery DNS resolution delays. Use DNS TTL settings in connection strings and implement connection retry logic with exponential backoff.
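Retry with exponential backoff can be sketched as follows; the connect function, attempt limit, and base delay are illustrative assumptions, not library defaults:

```javascript
// Retry an async connect function with exponential backoff plus jitter.
// maxAttempts and baseDelayMs are illustrative values; tune per deployment.
async function connectWithBackoff(connect, maxAttempts = 5, baseDelayMs = 100) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // give up after last attempt
      // Double the delay each attempt; jitter avoids thundering-herd retries.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * baseDelayMs;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage would look like connectWithBackoff(() => client.connect()), letting transient DNS or network-policy failures resolve before the retries are exhausted.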
For prepared statements, ensure they're created on the same connection they're used on. Attempting to use a prepared statement name from a different connection will fail because prepared statements are connection-scoped.
Cursors declared WITH HOLD are an exception in one respect: they survive transaction commit. They remain scoped to the session that created them, however, so no cursor can be used from a different connection. Regular cursors are closed when the transaction ends.
When debugging, enable statement logging and TCP-level packet capture:
sudo tcpdump -i lo port 5432 -A
This reveals exactly when and why the TCP connection closes.