Heroku PostgreSQL plans have connection limits (20 for Essential, 120+ for Standard). When your application opens too many connections without reusing them, you hit this limit. Fix it by implementing connection pooling with pgBouncer, reducing your connection pool size, or upgrading your plan.
Heroku PostgreSQL database plans enforce strict connection limits based on your tier. When your application or its processes exceed the maximum number of simultaneous connections allowed, new connection attempts fail with a "Connection limit reached" error. This typically happens when connections are opened for each request without being reused in a pool, when connections are not properly closed after use (connection leaks), or when you have too many application processes (dynos) each opening independent connections. Essential-tier databases support only 20 connections per role, while Standard-tier plans support 120-400+, depending on the tier.
First, determine how many connections are currently open and identify what is consuming them:
```bash
# See connection stats for your database
heroku pg:info --app your-app-name

# View active connections and their sources
heroku pg:ps --app your-app-name

# Kill all connections if needed (emergency measure)
heroku pg:killall --app your-app-name
```

The heroku pg:ps output shows the process ID (pid), username, and query of each connection. Look for patterns: are connections coming from your web dynos, workers, or external tools?
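You can also inspect pg_stat_activity directly to see which clients hold connections and in what state. Below is a minimal sketch using node-postgres; it assumes DATABASE_URL is set in your environment and that the pg package is installed:

```javascript
import { Pool } from 'pg';

// Assumes DATABASE_URL points at your Heroku Postgres database.
const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 1 });

// Group open connections by application name and state to see what is holding them.
const { rows } = await pool.query(`
  SELECT application_name, state, count(*) AS connections
  FROM pg_stat_activity
  WHERE datname = current_database()
  GROUP BY application_name, state
  ORDER BY connections DESC
`);

console.table(rows);
await pool.end();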
The recommended solution is connection pooling with pgBouncer, which Heroku Postgres offers as server-side connection pooling and which can also run inside your dynos via a buildpack. This pools connections in front of the database, allowing many application connections to share fewer database connections:
```bash
# Attach Heroku's server-side connection pooling (pgBouncer) to your database
heroku pg:connection-pooling:attach DATABASE_URL --as DATABASE_CONNECTION_POOL -a your-app-name

# Or run pgBouncer inside the dyno by adding the pgBouncer buildpack
heroku buildpacks:add https://github.com/heroku/heroku-buildpack-pgbouncer.git -a your-app-name
```

Once pooling is enabled, point your application at the pooled connection URL. Heroku exposes it as a separate config var (DATABASE_CONNECTION_POOL_URL for the attachment name used above) rather than changing DATABASE_URL. This allows your app to open many logical connections to pgBouncer while pgBouncer maintains fewer actual database connections. Note that server-side pooling has historically been limited to Standard-tier and higher plans; the buildpack approach runs inside your dynos and works regardless of plan.
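In the application, prefer the pooled URL when it is present. A minimal sketch with node-postgres, assuming the attachment was named DATABASE_CONNECTION_POOL as in the command above:

```javascript
import { Pool } from 'pg';

// Prefer the pgBouncer endpoint when it has been attached; fall back to the direct URL.
const connectionString =
  process.env.DATABASE_CONNECTION_POOL_URL || process.env.DATABASE_URL;

const pool = new Pool({ connectionString });

// All queries now flow through pgBouncer rather than straight to Postgres.
const { rows } = await pool.query('SELECT 1 AS ok');
console.log(rows[0].ok);
```

Keep in mind that Heroku's server-side pooling uses pgBouncer's transaction mode by default, so session-scoped features (for example named prepared statements held across transactions) may not behave as they do on a direct connection.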
For Node.js with node-postgres, use a connection pool and configure pool size appropriately:
```javascript
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 5,                        // node-postgres defaults to 10 per pool; lower it per dyno on constrained plans
  idleTimeoutMillis: 30000,      // close idle clients after 30 seconds
  connectionTimeoutMillis: 5000, // fail fast if no connection is available within 5 seconds
});

// Use the pool for all queries
const result = await pool.query('SELECT * FROM users');
```

For Prisma ORM, configure the datasource in prisma/schema.prisma:
```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // Prisma manages its own connection pool internally.
  // The default pool size is num_physical_cpus * 2 + 1; append ?connection_limit=5
  // to the connection URL to reduce it in constrained environments.
}
```

For other frameworks (Rails, Django, etc.), consult their documentation for connection pool configuration and keep the pool size modest (5-15 connections, depending on your database plan and dyno count).
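For Prisma specifically, you can also cap the pool without editing the schema by overriding the datasource URL when the client is constructed. A sketch under the assumption that DATABASE_URL carries no existing query string; connection_limit and pool_timeout are Prisma connection-string parameters:

```javascript
import { PrismaClient } from '@prisma/client';

// Assumes DATABASE_URL has no query string yet; otherwise merge the parameters instead.
const prisma = new PrismaClient({
  datasources: {
    db: {
      url: `${process.env.DATABASE_URL}?connection_limit=5&pool_timeout=10`,
    },
  },
});

// Queries now share at most 5 database connections per process.
// "user" is a hypothetical model; replace it with one from your schema.
const users = await prisma.user.findMany();
```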
If you have multiple application dynos or background workers, each opening independent connections, reduce the total number:
```bash
# Check current process types and dyno counts
heroku ps -a your-app-name

# Scale down web dynos
heroku ps:scale web=1 -a your-app-name

# Scale specific process types, e.g. disable workers if they are not needed
heroku ps:scale worker=0 -a your-app-name
```

Each dyno that connects to the database draws from the same connection budget. If you have 5 web dynos each running a connection pool of 10, you're using 50 connections. Consider using Heroku Scheduler for periodic tasks instead of always-running worker dynos.
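To pick a safe per-dyno pool size, work backwards from the plan's connection limit. A back-of-the-envelope sketch; the numbers below are assumptions you should replace with your own plan limit, dyno counts, and a margin for one-off dynos, migrations, and monitoring tools:

```javascript
// Assumed inputs: adjust to your plan and topology.
const planConnectionLimit = 120; // e.g. a Standard-tier limit
const webDynos = 2;
const workerDynos = 1;
const reservedConnections = 20;  // margin for one-off dynos, release-phase migrations, monitoring

const connectingProcesses = webDynos + workerDynos;
const maxPoolPerDyno = Math.floor(
  (planConnectionLimit - reservedConnections) / connectingProcesses
);

console.log(`Set each dyno's pool max to at most ${maxPoolPerDyno} connections`);
```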
Ensure all database connections are properly closed in error cases:
```javascript
// Bad: the connection is never released if the query throws
const client = await pool.connect();
const result = await client.query('SELECT * FROM users');
client.release();

// Good: the connection is released even if the query fails
const client = await pool.connect();
try {
  const result = await client.query('SELECT * FROM users');
} finally {
  client.release();
}

// Best: use pool.query() directly (it acquires and releases the client automatically)
const result = await pool.query('SELECT * FROM users');
```

Always use try/finally or the pool's query methods directly rather than holding onto connections. In middleware-based frameworks, ensure that connections opened in middleware are also released after the response is sent; a sketch follows below.
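For example, in an Express-style app that checks out a client per request (a pattern to avoid when pool.query() is enough), release the client when the response finishes. A hedged sketch assuming Express and node-postgres; req.db is a hypothetical property name:

```javascript
import express from 'express';
import { Pool } from 'pg';

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 5 });

// Middleware that checks out one client per request.
app.use(async (req, res, next) => {
  const client = await pool.connect();
  req.db = client; // hypothetical property read by downstream handlers

  // Release exactly once, whether the response finishes normally or the client disconnects.
  let released = false;
  const release = () => {
    if (!released) {
      released = true;
      client.release();
    }
  };
  res.on('finish', release);
  res.on('close', release);

  next();
});

app.get('/users', async (req, res, next) => {
  try {
    const { rows } = await req.db.query('SELECT * FROM users');
    res.json(rows);
  } catch (err) {
    next(err);
  }
});
```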
If you have preboot enabled, it temporarily doubles connection usage during deployments. Disable it if connection count is marginal:
```bash
# Check whether preboot is enabled
heroku features -a your-app-name

# Disable preboot
heroku features:disable preboot -a your-app-name
```

With preboot disabled, old dynos are shut down before the new ones start, so connection usage won't double during a deployment (the trade-off is a brief gap in capacity while the new dynos boot).
If you've optimized pooling and connection usage but still need more capacity, upgrade to a higher Heroku Postgres tier:
```bash
# List available plans
heroku addons:plans heroku-postgresql

# Upgrade to a larger tier (e.g., Standard-2 with 400 connections)
heroku addons:upgrade heroku-postgresql:standard-2 -a your-app-name
```

If the app has more than one Postgres add-on, pass the specific add-on name instead of the service name. Note: Heroku does not provide a way to raise the connection limit within a given tier; upgrading is the only way to get more connections. Weigh the cost of a larger plan against further optimizing your application's connection usage.
Important considerations: Heroku has firm connection limits per tier that cannot be adjusted; upgrading is the only path to more connections. Database migrations run during a release phase execute on a one-off dyno that opens its own connections; these are short-lived but still count against the database's connection limit while they run. If you use Heroku Scheduler for periodic tasks, each scheduled job runs on a one-off dyno that opens database connections. Third-party tools such as monitoring services (New Relic, DataDog) and reporting tools may also maintain persistent connections. Account for all of these when calculating your connection needs. Connection pooling with pgBouncer in transaction pooling mode can multiplex thousands of application connections onto far fewer actual database connections, making it highly effective for this problem. When debugging connection exhaustion, run heroku pg:ps repeatedly to watch for connections that never close or that originate from unexpected sources.
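If you want the app to report its own pool pressure alongside heroku pg:ps, node-postgres exposes counters on the Pool object. A small sketch, assuming the pool from the earlier examples; the logging interval and warning condition are arbitrary choices:

```javascript
// Periodically log how the in-process pool is being used.
// totalCount, idleCount, and waitingCount are properties of the node-postgres Pool.
setInterval(() => {
  const { totalCount, idleCount, waitingCount } = pool;
  console.log(
    `pg pool: ${totalCount} open, ${idleCount} idle, ${waitingCount} waiting`
  );

  // Sustained waiters usually mean the pool is too small or connections are leaking.
  if (waitingCount > 0) {
    console.warn('pg pool: queries are waiting for a free connection');
  }
}, 30_000);
```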