PostgreSQL has used up the connection slots available to regular clients and is holding the remaining slots for superusers only. Applications receive this error when every non-reserved slot is in use, which typically happens because of application connection leaks, an undersized max_connections setting, or high traffic without proper connection pooling.
This PostgreSQL error occurs when the server has exhausted all connection slots available to regular (non-superuser) clients. PostgreSQL reserves a portion of max_connections exclusively for superusers via the superuser_reserved_connections parameter (default 3 slots). When all non-reserved slots are in use, new client connections are rejected with this FATAL error. The effective limit for regular clients is therefore max_connections (default 100) minus superuser_reserved_connections (default 3), and on AWS RDS also minus rds.rds_superuser_reserved_connections (default 2). This safety mechanism ensures database administrators can always connect even when the database appears full, but it means regular applications are turned away once that lower ceiling is reached.
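As a quick check of that ceiling, the number of slots available to regular clients can be computed directly from the server settings (a minimal sketch; on AWS RDS, subtract rds.rds_superuser_reserved_connections as well):
SELECT current_setting('max_connections')::int
     - current_setting('superuser_reserved_connections')::int
       AS slots_for_regular_clients;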
Connect as a superuser and examine active connections:
-- View current connection counts
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;
-- List all active connections with details
SELECT pid, usename, application_name, state, query_start
FROM pg_stat_activity
ORDER BY query_start DESC;
-- Check max_connections and superuser_reserved_connections settings
SHOW max_connections;
SHOW superuser_reserved_connections;
Identify how many connections are active and what percentage of max_connections is in use.
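For example, the current count and percentage can be read in one query (a sketch combining pg_stat_activity with current_setting; column names are illustrative):
SELECT count(*) AS current_connections,
       current_setting('max_connections')::int AS max_connections,
       round(100.0 * count(*) / current_setting('max_connections')::int, 1) AS percent_used
FROM pg_stat_activity;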
Free up connection slots by terminating idle or problematic connections:
-- Find idle connections (in transaction but doing nothing)
SELECT pid, usename, application_name, state, state_change
FROM pg_stat_activity
WHERE state = 'idle in transaction'
AND query_start < now() - interval '10 minutes';
-- Terminate a specific connection
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE pid = <PID>;
-- Terminate all idle connections from a specific application
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE application_name = 'app_name' AND state = 'idle';
-- Terminate connections idle for more than 30 minutes
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE state = 'idle'
AND state_change < now() - interval '30 minutes'
AND pid <> pg_backend_pid();
If terminating connections is not enough, raise the ceiling by increasing max_connections (requires superuser access and a server restart):
ALTER SYSTEM SET max_connections = 200;
-- max_connections is a postmaster-level parameter: a full server restart is required
-- for the new value to take effect; SELECT pg_reload_conf(); is not sufficient
ALTER SYSTEM writes the new value to postgresql.auto.conf, so it persists across restarts, but it only takes effect after PostgreSQL is fully restarted; a configuration reload is not enough. Also remember that each connection consumes server memory (a backend process plus work_mem for its sorts and hashes), so verify sufficient memory is available and revisit shared_buffers and work_mem before raising the limit substantially.
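Before raising the limit, it can help to review the memory-related settings alongside max_connections; one way to list them (illustrative only, appropriate values depend on available RAM):
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'work_mem', 'maintenance_work_mem');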
Edit the PostgreSQL configuration file to make the change permanent. Locate postgresql.conf:
# Find postgresql.conf location
psql -U postgres -c "SHOW config_file;"
# On Linux/macOS it's typically:
# /etc/postgresql/*/main/postgresql.conf
# /var/lib/pgsql/data/postgresql.conf
# /usr/local/var/postgres/postgresql.conf (macOS Homebrew)
Edit the file and locate or add the max_connections parameter:
# Original (default)
max_connections = 100
# Increase based on load
max_connections = 300
Choose a value based on your workload. As a guideline: web apps typically use 50-200, high-concurrency apps 200-500, and data warehouse environments may exceed 1000. After editing, restart PostgreSQL:
# Linux
sudo systemctl restart postgresql
# macOS Homebrew
brew services restart postgresql
# Or using pg_ctl
pg_ctl -D /path/to/data/directory restart
Deploy PgBouncer between applications and PostgreSQL to multiplex many client connections into fewer database connections:
# Install PgBouncer
sudo apt install pgbouncer # Debian/Ubuntu
brew install pgbouncer # macOS
Edit /etc/pgbouncer/pgbouncer.ini (or /usr/local/etc/pgbouncer.ini on macOS):
[databases]
mydb = host=localhost port=5432 dbname=mydb
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
Applications connect to PgBouncer on port 6432 instead of PostgreSQL directly. PgBouncer maintains a small pool of connections to PostgreSQL and reuses them across many client connections. This dramatically reduces the actual database connections needed.
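Once PgBouncer is running, its admin console (the virtual pgbouncer database, available to users listed in admin_users) can confirm how client connections are being multiplexed; for example:
-- Connect with: psql -h 127.0.0.1 -p 6432 -U <admin_user> pgbouncer
SHOW POOLS;    -- client vs. server connection counts per database/user pair
SHOW CLIENTS;  -- individual client connections
SHOW SERVERS;  -- connections PgBouncer holds open to PostgreSQL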
Review application code to ensure connections are properly closed. Connection leaks occur when:
Python (psycopg2):
# BAD: Connection never closed
conn = psycopg2.connect("dbname=mydb")
cursor = conn.cursor()
# ... query code ...
# Missing: conn.close()

# GOOD: Guarantee the connection is closed
# (note: psycopg2's "with conn:" only manages the transaction; it does not close the connection)
import psycopg2
from contextlib import closing

with closing(psycopg2.connect("dbname=mydb")) as conn:
    with conn.cursor() as cursor:
        ...  # query code
# Connection closed automatically when the block exits
Node.js (pg):
// BAD: Connection not released
const { Client } = require('pg');
const client = new Client();
await client.connect();
const res = await client.query('SELECT ...');
// Missing: await client.end();

// GOOD: Use pools with automatic release
const { Pool } = require('pg');
const pool = new Pool();
const res = await pool.query('SELECT ...');
// Connection automatically returned to the pool
Ruby (ActiveRecord):
# GOOD: Connections managed automatically
ActiveRecord::Base.connection_pool.with_connection do |conn|
  conn.execute('SELECT ...') # Released when block exits
end
Always use connection pools or context managers to ensure connections are released.
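From the database side, a quick way to spot a leaking application is to group connections by application, client address, and state (a sketch; setting application_name in connection strings makes this far more useful):
SELECT application_name, client_addr, state, count(*)
FROM pg_stat_activity
GROUP BY application_name, client_addr, state
ORDER BY count(*) DESC;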
Set a timeout to automatically disconnect clients that leave transactions open without activity:
-- Set globally (requires superuser)
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();
-- Or in postgresql.conf
idle_in_transaction_session_timeout = 600000 # 10 minutes in milliseconds
This prevents idle transactions from holding connection slots indefinitely. Applications receive a disconnect notice after the timeout, forcing them to reconnect. Common values range from 5-30 minutes depending on your application patterns.
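The timeout can also be scoped more narrowly than the whole server, for example per database or per role (a sketch using the hypothetical names myapp_db and reporting_user; the setting applies to new sessions):
-- Apply only to one database
ALTER DATABASE myapp_db SET idle_in_transaction_session_timeout = '10min';
-- Or only to one role
ALTER ROLE reporting_user SET idle_in_transaction_session_timeout = '5min';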
Set up ongoing monitoring to prevent future exhaustion:
-- Create a monitoring query to check connection health
CREATE OR REPLACE FUNCTION check_connection_health() RETURNS TABLE (
    total_connections bigint,
    active_connections bigint,
    idle_connections bigint,
    max_allowed bigint,
    reserved_slots bigint,
    available_slots bigint,
    usage_percent numeric
) AS $$
SELECT
    (SELECT count(*) FROM pg_stat_activity) as total_connections,
    (SELECT count(*) FROM pg_stat_activity WHERE state != 'idle') as active_connections,
    (SELECT count(*) FROM pg_stat_activity WHERE state = 'idle') as idle_connections,
    (SELECT setting::bigint FROM pg_settings WHERE name = 'max_connections') as max_allowed,
    (SELECT setting::bigint FROM pg_settings WHERE name = 'superuser_reserved_connections') as reserved_slots,
    (SELECT setting::bigint FROM pg_settings WHERE name = 'max_connections') -
        (SELECT count(*) FROM pg_stat_activity) -
        (SELECT setting::bigint FROM pg_settings WHERE name = 'superuser_reserved_connections') as available_slots,
    ROUND(100.0 * (SELECT count(*) FROM pg_stat_activity)::numeric /
        (SELECT setting::bigint FROM pg_settings WHERE name = 'max_connections'), 2) as usage_percent;
$$ LANGUAGE SQL;
-- Check connection health
SELECT * FROM check_connection_health();
Integrate this query into monitoring systems (Prometheus, Grafana, DataDog) to alert when usage approaches critical thresholds (e.g., > 80%).
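If the monitoring system can run SQL directly, a simple threshold check built on the function above returns a row only when usage crosses the 80% line suggested here (adjust the threshold to taste):
SELECT total_connections, max_allowed, usage_percent
FROM check_connection_health()
WHERE usage_percent > 80;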
PostgreSQL sizes its shared memory structures for every possible connection at server startup based on the max_connections setting. This is why increasing max_connections requires a full server restart to take effect; a configuration reload via pg_reload_conf() is not sufficient. On AWS RDS, the rds.rds_superuser_reserved_connections setting (default 2) additionally reserves slots, so the connections actually available to regular users are further reduced.

The pg_stat_activity view is essential for diagnosing connection issues: examine the query column to identify what connections are doing and which ones are stuck. For mission-critical systems, consider connection pooling mandatory: even with a high max_connections, application performance degrades as connections approach the limit due to per-connection memory overhead. PgBouncer operates in three modes: session pooling (one server connection per client), transaction pooling (one per transaction), and statement pooling (one per statement). Transaction pooling is recommended for most web applications as it balances performance with compatibility.

PostgreSQL 16 added the reserved_connections parameter, distinct from superuser_reserved_connections, together with the pg_use_reserved_connections role, for more granular control over who may use reserved slots. Replication connections are governed separately by max_wal_senders; since PostgreSQL 12 they no longer count against max_connections, but on older versions they do, so account for them when sizing. The pg_settings view shows all configuration parameters; use it to audit current settings: SELECT name, setting FROM pg_settings WHERE name LIKE '%connection%';
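On PostgreSQL 16 or later, a sketch of that newer reservation mechanism might look like the following (monitoring_role is a hypothetical role; like superuser_reserved_connections, reserved_connections may require a restart to change):
-- Reserve 5 slots for roles granted pg_use_reserved_connections
ALTER SYSTEM SET reserved_connections = 5;
-- Allow a non-superuser role to use the reserved pool
GRANT pg_use_reserved_connections TO monitoring_role;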