This error occurs when the PostgreSQL server has reached its maximum concurrent connection limit (max_connections parameter). The server rejects new connection attempts to prevent resource exhaustion. Increase max_connections, implement connection pooling, or identify and close idle connections to resolve the issue.
PostgreSQL error 'FATAL: sorry, too many clients already' indicates that the database server cannot accept new client connections because it has reached the maximum number of concurrent connections allowed. Each PostgreSQL client connection consumes server resources including shared memory, process slots, and file descriptors. The `max_connections` parameter controls this limit (default is typically 100 connections). Once all slots are in use, PostgreSQL rejects further connection attempts with this fatal error to prevent system overload. This is a hard limit - once reached, no new connections can be established until existing connections close or are terminated. This error is particularly common in applications with connection leaks, development frameworks with hot-reload capabilities, or systems with many concurrent application servers competing for database connections.
First, identify the current limit and how many connections are active:
-- Check the current max_connections setting
SHOW max_connections;
-- Check how many connections are currently active
SELECT count(*) FROM pg_stat_activity;
-- See detailed connection info
SELECT pid, usename, state, query
FROM pg_stat_activity
ORDER BY state;

Compare the active connection count to the max_connections value. If active >= max_connections, you've hit the limit.
Example output:
max_connections | 100
count | 100

Note: PostgreSQL reserves superuser_reserved_connections slots (default 3) that aren't available to regular users, so the effective limit for applications is slightly lower. Also, on PostgreSQL 10+ pg_stat_activity includes background processes, so the raw count may differ from the number of client connections; add WHERE backend_type = 'client backend' to count only clients.
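If it's useful, the effective limit for regular applications can be computed in a single query - a small sketch using current_setting(), which reads any server parameter as text:

-- Connection slots actually available to non-superuser applications
SELECT current_setting('max_connections')::int
     - current_setting('superuser_reserved_connections')::int
     AS effective_app_limit;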
To immediately free up connection slots, terminate connections that are idle or stuck:
-- Find connections that have been idle for over 10 minutes
SELECT pid, usename, state, state_change
FROM pg_stat_activity
WHERE state = 'idle'
AND now() - state_change > interval '10 minutes';
-- Terminate a specific connection (replace 12345 with PID)
SELECT pg_terminate_backend(12345);
-- Terminate ALL idle connections (use carefully!)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND pid <> pg_backend_pid();

Warning: Only terminate connections you're confident won't cause data loss. Terminating a session rolls back its open transaction, so any in-flight work is lost (its locks are released as part of the rollback).
This is a temporary solution - connections will accumulate again unless the root cause is fixed.
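To keep stale sessions from piling up again, PostgreSQL can drop them automatically. A hedged sketch - idle_in_transaction_session_timeout exists in PostgreSQL 9.6+, while idle_session_timeout requires PostgreSQL 14+:

-- Kill sessions stuck "idle in transaction" after 10 minutes
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
-- Kill fully idle sessions after 30 minutes (PostgreSQL 14+ only;
-- be careful if a pooler deliberately keeps idle server connections)
ALTER SYSTEM SET idle_session_timeout = '30min';
-- Both settings apply on reload, no restart needed
SELECT pg_reload_conf();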
Increase the max_connections parameter to allow more concurrent connections:
-- Check PostgreSQL version and current setting
SELECT version();
SHOW max_connections;
-- Increase max_connections (requires restart)
ALTER SYSTEM SET max_connections = 200;

Then restart PostgreSQL:
On Linux (systemd):
sudo systemctl restart postgresql

On macOS (Homebrew):
brew services restart postgresql

On Docker:
docker restart <container_id>

Direct postgresql.conf edit (alternative):
# Edit the config file
sudo nano /etc/postgresql/14/main/postgresql.conf
# Find and change the line:
# max_connections = 100
# To:
max_connections = 200
# Restart PostgreSQL
sudo systemctl restart postgresql

Important: Increasing max_connections increases PostgreSQL's memory usage (roughly 600KB of baseline overhead per connection, plus whatever each backend allocates for work_mem during queries). Verify your server has sufficient RAM:
# Check available memory
free -h

As a rule of thumb, don't raise max_connections beyond what your RAM can support: leave headroom for shared_buffers, the OS, and per-backend memory (including work_mem) on top of the baseline per-connection overhead.
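To sanity-check a proposed value, a rough back-of-envelope estimate (using the ~600KB per-connection figure from above; actual usage varies with work_mem and workload):

-- Rough baseline overhead for a proposed max_connections of 200
SELECT 200 * 600 / 1024 AS approx_overhead_mb;  -- ~117 MB, before work_mem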
Instead of increasing max_connections indefinitely, use a connection pooler to share connections efficiently:
PgBouncer (most popular lightweight option):
# Install PgBouncer
sudo apt-get install pgbouncer # Debian/Ubuntu
brew install pgbouncer # macOS
# Configure /etc/pgbouncer/pgbouncer.ini
[databases]
myapp = host=localhost port=5432 dbname=myapp user=postgres
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt   # maps users to password hashes
pool_mode = transaction # or session
max_client_conn = 1000
default_pool_size = 25
reserve_pool_size = 5
reserve_pool_timeout = 3
max_db_connections = 100
# Start PgBouncer
pgbouncer -d /etc/pgbouncer/pgbouncer.ini   # -d runs it as a daemon

PgPool-II (more advanced, supports failover):
# Install PgPool
sudo apt-get install pgpool2
# Configure /etc/pgpool2/pgpool.conf
num_init_children = 32
max_pool = 4
child_life_time = 300

Note: PgPool can open up to num_init_children x max_pool backend connections (32 x 4 = 128 here), so size these values against the server's max_connections.

Docker Compose example with PgBouncer:
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"
  pgbouncer:
    # There is no official "pgbouncer" image on Docker Hub; community images
    # such as edoburu/pgbouncer or bitnami/pgbouncer are common choices.
    # Environment variable names vary by image - check your image's docs.
    image: edoburu/pgbouncer
    depends_on:
      - postgres
    ports:
      - "6432:6432"
    environment:
      DATABASE_URL: postgres://postgres:secret@postgres:5432/myapp

After setting up PgBouncer, connect your applications to the pooler port (6432) instead of PostgreSQL directly (5432), e.g. postgres://user:pass@localhost:6432/myapp.
Check your application for connections that aren't being closed properly:
Node.js/Prisma example:
// WRONG: Creates a new client on each request
app.get('/api/users', async (req, res) => {
  const client = new pg.Client(dbConfig);
  await client.connect(); // New connection - leaks if never closed!
  const result = await client.query('SELECT * FROM users');
  // Missing: await client.end();
  res.json(result.rows);
});

// CORRECT: Share one pool across requests; the pool reuses connections
const pool = new pg.Pool({ ...dbConfig, max: 10 }); // cap concurrent connections
app.get('/api/users', async (req, res) => {
  const result = await pool.query('SELECT * FROM users');
  res.json(result.rows);
});

Using Prisma (handles pooling automatically):
// prisma.ts - Create singleton
import { PrismaClient } from '@prisma/client';

const globalForPrisma = global as unknown as { prisma: PrismaClient };

export const prisma =
  globalForPrisma.prisma ||
  new PrismaClient({
    log: ['query'],
  });

if (process.env.NODE_ENV !== 'production')
  globalForPrisma.prisma = prisma;

// Usage - reuse the singleton
const users = await prisma.user.findMany();

Python/SQLAlchemy example:
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# WRONG: Creates a new engine (and connection pool) per request
def get_users():
    engine = create_engine('postgresql://user:pass@localhost/myapp')
    with engine.connect() as conn:
        ...  # a fresh pool is built and discarded on every call

# CORRECT: Create the engine once at module level and reuse its pool
engine = create_engine(
    'postgresql://user:pass@localhost/myapp',
    poolclass=QueuePool,
    pool_size=10,
    max_overflow=20,
)

def get_users():
    with engine.connect() as conn:
        ...  # connections are borrowed from and returned to the pool

Look for patterns where new database client instances are created in request handlers or initialization code.
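To see which application or host is holding the connections from the database side, pg_stat_activity exposes client_addr and application_name - a quick diagnostic sketch:

-- Connection counts per client host and application
SELECT client_addr, application_name, count(*) AS conns
FROM pg_stat_activity
GROUP BY client_addr, application_name
ORDER BY conns DESC;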
Development frameworks like Next.js create new database clients on file changes, exhausting connections:
Next.js / Node.js fix (singleton pattern):
// lib/db.ts
import { PrismaClient } from '@prisma/client';

const globalForPrisma = global as unknown as { prisma: PrismaClient };

export const prisma =
  globalForPrisma.prisma ||
  new PrismaClient({
    log: process.env.NODE_ENV === 'development' ? ['error', 'warn'] : ['error'],
  });

if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma;
}

In your app routes (instead of creating new clients):
import { prisma } from '@/lib/db';

export async function GET() {
  const users = await prisma.user.findMany();
  return Response.json(users);
}

Django fix (persistent connections):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myapp',
        'CONN_MAX_AGE': 600,  # keep each connection open for up to 600s between requests
        'OPTIONS': {
            'connect_timeout': 10,
        },
    }
}

Disable hot-reload temporarily (last resort):
# Next.js has no supported flag to disable hot reload outright; to rule
# it out, run a production build locally instead
next build && next start

The issue is most acute in development with hot-reload enabled - fix it by ensuring database clients are singleton instances cached globally.
Set up PostgreSQL's reserved connection slots and monitor to prevent future issues:
-- Adjust reserved connections for superuser operations
ALTER SYSTEM SET superuser_reserved_connections = 3;
-- Log connection usage
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
-- superuser_reserved_connections requires a restart; the logging settings
-- only need a reload: SELECT pg_reload_conf();
-- (Restart: sudo systemctl restart postgresql)
-- Afterwards, check connection history in the PostgreSQL log:
-- tail -f /var/log/postgresql/postgresql-15-main.log

Monitor connections with a query:
-- See connection distribution by user and database
SELECT
datname as database,
usename as user,
count(*) as connection_count
FROM pg_stat_activity
GROUP BY datname, usename
ORDER BY connection_count DESC;
-- See which queries are using connections
SELECT
pid,
usename,
datname,
state,
query,
state_change
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY state_change;

Set up a monitoring alert (pseudocode):
#!/bin/bash
# Check connections every 5 minutes
while true; do
  CONN_COUNT=$(psql -tA -c "SELECT count(*) FROM pg_stat_activity;")
  MAX_CONN=$(psql -tA -c "SHOW max_connections;")
  THRESHOLD=$((MAX_CONN - 10))
  if [ "$CONN_COUNT" -gt "$THRESHOLD" ]; then
    echo "WARNING: $CONN_COUNT of $MAX_CONN connections in use!"
    # Send alert to monitoring service
  fi
  sleep 300
done

Proactive monitoring prevents hitting the limit unexpectedly.
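If your monitoring stack can run SQL checks directly, the same alert can be expressed as a single query - a sketch (the round() is purely cosmetic):

-- Percentage of connection slots currently in use
SELECT round(100.0 * count(*) / current_setting('max_connections')::int, 1)
       AS pct_used
FROM pg_stat_activity;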
Understanding PostgreSQL Connection Limits:
PostgreSQL connections are expensive resources. Each connection:
- Allocates roughly 600KB+ of server memory (backend-local memory plus per-connection bookkeeping in shared memory, separate from shared_buffers)
- Consumes a file descriptor from the OS
- Runs a separate backend process
- Takes system resources that could be used for query execution
This is why PostgreSQL doesn't allow unlimited connections by default.
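You can observe this process-per-connection model directly: pg_stat_activity lists every backend, including PostgreSQL's own background processes (the backend_type column is available since PostgreSQL 10):

-- Each row corresponds to a separate server process
SELECT pid, backend_type, usename
FROM pg_stat_activity
ORDER BY backend_type;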
Connection Pool Modes (PgBouncer):
- Session mode: Each client gets a dedicated connection for the session (safest, uses more server connections)
- Transaction mode: Connection is released after each transaction completes (more efficient, requires app to use transactions properly)
- Statement mode: Connection released after each statement (most aggressive, may break apps using multi-statement transactions)
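To verify which pool mode is active and how pools are behaving under load, PgBouncer exposes an admin console on its own virtual "pgbouncer" database - a sketch, assuming the pooler listens on port 6432:

-- Connect with: psql -h 127.0.0.1 -p 6432 -U <admin_user> pgbouncer
SHOW POOLS;    -- per-pool client/server connection counts and waiting clients
SHOW CONFIG;   -- running settings, including pool_mode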
Why This Error Happens in Production:
1. Development frameworks hot-reloading: Next.js, NestJS, Rails in development mode reinitialize clients on file changes
2. Missing connection pooling: Applications connecting directly to PostgreSQL without a pooler
3. Underestimated load: Initial max_connections setting was based on lower traffic patterns
4. Connection storms: Sudden traffic spike causes all app instances to request connections simultaneously
5. Backup or maintenance tools: pg_dump, pg_restore, or replication consuming slots
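One guardrail against cases 3-5: PostgreSQL can cap connections per role or per database, so a runaway application or tool can't starve everyone else. A sketch with hypothetical names app_user and myapp (note these limits don't apply to superusers):

-- Cap a specific role at 50 concurrent connections
ALTER ROLE app_user CONNECTION LIMIT 50;
-- Cap an entire database at 80
ALTER DATABASE myapp CONNECTION LIMIT 80;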
Best Practices:
1. Always use connection pooling in production - PgBouncer, PgPool, or built-in application pooling
2. Set max_connections based on memory, not arbitrarily - Each connection uses ~600KB
3. Monitor connection usage - Set up alerts at 80% capacity
4. Use the singleton pattern in application code - Create one database client instance, reuse it
5. Test connection behavior during load testing - Identify leaks before production
6. Version-specific behavior - connection scalability improved markedly in recent releases (notably PostgreSQL 14); upgrade if possible
Database-as-a-Service (managed) Variations:
- AWS RDS: Connection limit depends on instance type; use RDS Proxy for pooling
- Azure Database for PostgreSQL: Flexible Server offers a built-in PgBouncer option
- Heroku: Includes PgBouncer; ensure app uses DATABASE_URL connection string
- Railway/Render: Managed PostgreSQL with connection limits per tier
Check your provider's documentation for pooling options if using managed databases.
Debugging Connection Leaks:
Track connections over time to identify leaks:
# Every 10 seconds, log the connection count
while true; do
  psql -t -c "SELECT now(), count(*) FROM pg_stat_activity;" >> conn_log.txt
  sleep 10
done
# Analyze: if the count keeps climbing over time, you have a leak
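If the log confirms a leak, the oldest connections are usually the leaked ones; backend_start in pg_stat_activity gives each session's age:

-- List the ten longest-lived sessions
SELECT pid, usename, application_name, state,
       now() - backend_start AS connection_age
FROM pg_stat_activity
ORDER BY backend_start
LIMIT 10;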
How to fix "insufficient columns in unique constraint for partition key" in PostgreSQL
ERROR 42501: must be owner of table
How to fix "must be owner of table" in PostgreSQL
trigger cannot change partition destination
How to fix "Trigger cannot change partition destination" in PostgreSQL
SSL error: certificate does not match host name
SSL error: certificate does not match host name in PostgreSQL
No SSL connection
No SSL connection to PostgreSQL