This error occurs when Prisma's connection pool is exhausted and cannot serve a query within the timeout period. It's most common in serverless environments, high-concurrency scenarios, or when running many parallel queries with insufficient connection limits.
When Prisma Client executes a query, it must acquire a database connection from its connection pool. The pool has a limited number of connections (default: `num_physical_cpus * 2 + 1`). When all connections are in use, new queries wait in a queue for up to 10 seconds (the default pool timeout). If the query engine cannot obtain a connection within that window, either because connections are held too long, there are too many concurrent queries, or the pool is too small, it fails the waiting query with a P2024 error.

This is particularly common in serverless environments (AWS Lambda, Vercel Functions, Cloudflare Workers), where each function instance creates its own connection pool and can quickly exhaust the database's total connection limit. It also happens when using `Promise.all()` to run many Prisma queries in parallel without accounting for the connection limit.
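Prisma surfaces this failure as a `PrismaClientKnownRequestError` whose `code` is `'P2024'`. As a hedged sketch (the helpers `isPoolTimeout` and `withPoolRetry` are hypothetical names, not part of Prisma's API), you can detect that code and retry with backoff while you fix the underlying pool sizing:

```typescript
// Hypothetical helper: true if an error looks like a Prisma pool timeout.
// Only inspects the `code` property, so it works with any error shape.
function isPoolTimeout(err: unknown): boolean {
  return typeof err === 'object' && err !== null &&
    (err as { code?: string }).code === 'P2024';
}

// Hypothetical retry wrapper: reruns an operation on P2024 with
// exponential backoff, rethrowing any other error immediately.
async function withPoolRetry<T>(
  op: () => Promise<T>,
  retries = 3,
  baseDelayMs = 100
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (!isPoolTimeout(err) || attempt >= retries) throw err;
      // Back off before retrying so the pool has a chance to drain
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Retrying only masks the symptom; the fixes that follow address the cause.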
Add the `connection_limit` parameter to your database connection URL:

```bash
# .env
DATABASE_URL="postgresql://user:password@host:5432/db?connection_limit=10"
```

For serverless environments, start conservative:
```bash
# AWS Lambda / Vercel Functions - start with 1-3
DATABASE_URL="postgresql://user:password@host:5432/db?connection_limit=3"
```

Calculate your limit: if you have 100 max database connections and 20 possible concurrent serverless instances, set `connection_limit=5` (100 / 20 = 5).
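The same arithmetic as a tiny helper (a sketch; `perInstanceConnectionLimit` is a made-up name, not a Prisma utility):

```typescript
// Divide the database's max connections across peak concurrent instances,
// flooring the result and never going below 1 connection per instance.
function perInstanceConnectionLimit(
  dbMaxConnections: number,
  peakInstances: number
): number {
  return Math.max(1, Math.floor(dbMaxConnections / peakInstances));
}

console.log(perInstanceConnectionLimit(100, 20)); // 5, matching the example above
```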
For traditional servers, use the default formula or increase moderately:

```bash
# Traditional server with 8 CPUs: default is 17 (8*2+1)
# Increase to 25 if needed
DATABASE_URL="postgresql://user:password@host:5432/db?connection_limit=25"
```

If connections are busy but will free up soon, increase the time queries wait for a connection:
```bash
# Default is 10 seconds, increase to 20
DATABASE_URL="postgresql://user:password@host:5432/db?pool_timeout=20"
```

Warning: this makes queries wait longer, which can cascade into other timeouts. Only increase it if you're confident the pool will clear. (Setting `pool_timeout=0` disables the timeout entirely, so queries wait indefinitely for a connection.)
Combine it with a connection limit:

```bash
DATABASE_URL="postgresql://user:password@host:5432/db?connection_limit=10&pool_timeout=15"
```

If you're using `Promise.all()` with many Prisma queries, limit concurrency.
Bad - exceeds the connection pool:

```typescript
// With 50 items and connection_limit=10, queries pile up in the
// wait queue and can hit the pool timeout
const results = await Promise.all(
  items.map(item => prisma.order.findUnique({ where: { id: item.id } }))
);
```

Good - chunk requests:
```typescript
// Process in batches of 5
const chunkSize = 5;
const results = [];
for (let i = 0; i < items.length; i += chunkSize) {
  const chunk = items.slice(i, i + chunkSize);
  const chunkResults = await Promise.all(
    chunk.map(item => prisma.order.findUnique({ where: { id: item.id } }))
  );
  results.push(...chunkResults);
}
```

Better - use a single query:
```typescript
// One query uses one connection
const results = await prisma.order.findMany({
  where: { id: { in: items.map(i => i.id) } }
});
```

Serverless environments create many Prisma Client instances, each with its own pool. Use an external pooler to share connections.
Option 1: PgBouncer (self-hosted)

```bash
# Add pgbouncer=true to disable prepared statements
DATABASE_URL="postgresql://user:password@pgbouncer:6432/db?pgbouncer=true&connection_limit=1"
```

Option 2: Supabase Pooler (Supavisor)
```bash
# Use Supabase's transaction pooler URL
DATABASE_URL="postgresql://user:[email protected]:6543/db?pgbouncer=true"
```

Option 3: AWS RDS Proxy

```bash
DATABASE_URL="postgresql://user:[email protected]:5432/db"
```

Option 4: Prisma Accelerate (managed)
Prisma's managed connection pooling and caching service designed for serverless.
With an external pooler, set `connection_limit=1` in serverless functions, since the pooler handles connection management.
Don't create new PrismaClient instances on every function invocation:
Bad:

```typescript
// Creates a new client + pool on every invocation
export async function handler(event) {
  const prisma = new PrismaClient();
  const data = await prisma.user.findMany();
  await prisma.$disconnect(); // DON'T do this in serverless
  return data;
}
```

Good:
```typescript
// Reuse the client across warm starts
const prisma = new PrismaClient({
  datasources: {
    db: { url: process.env.DATABASE_URL }
  }
});

export async function handler(event) {
  const data = await prisma.user.findMany();
  // Don't disconnect - reuse the connection
  return data;
}
```

Prisma Client will disconnect automatically after inactivity, so don't call `$disconnect()` in serverless functions.
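The reuse pattern generalizes to caching the client on `globalThis`, which also survives dev-server hot reloads. A sketch with a generic factory (`getCachedClient` is a hypothetical helper; in practice `createClient` would be `() => new PrismaClient()`):

```typescript
// Cache an instance on globalThis under a fixed key so repeated module
// evaluations (hot reloads, warm starts) reuse one instance instead of
// creating a new client and pool each time.
function getCachedClient<T>(key: string, createClient: () => T): T {
  const g = globalThis as Record<string, unknown>;
  if (g[key] === undefined) {
    g[key] = createClient();
  }
  return g[key] as T;
}

// Usage sketch: const prisma = getCachedClient('prisma', () => new PrismaClient());
```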
Limit how many function instances run simultaneously to prevent overwhelming your database:
AWS Lambda:

```bash
aws lambda put-function-concurrency \
  --function-name my-function \
  --reserved-concurrent-executions 10
```

Vercel (in project settings or `vercel.json`):
```json
{
  "functions": {
    "api/**/*.ts": {
      "maxDuration": 10,
      "memory": 1024
    }
  }
}
```

Formula: if your database has 100 max connections and each function uses 3 connections, cap concurrency around 30 (100 / 3 ≈ 33; round down for headroom).
### Understanding Connection Pool Math
If you have:
- Database max connections: 100
- Serverless function connection_limit: 5
- Peak concurrent invocations: 25
You'll use 5 * 25 = 125 connections, exceeding your database's 100-connection limit. Solutions:
- Reduce connection_limit to 3 (3 * 25 = 75)
- Limit concurrency to 20 (5 * 20 = 100)
- Increase database max connections to 150
- Use an external pooler
### When Increasing Pool Size Makes It Worse
Counterintuitively, increasing connection_limit can sometimes worsen the problem. If you have 100 database connections and set connection_limit=50, only 2 app instances can run before exhausting the database.
In serverless, smaller per-instance pools (1-3) with an external pooler usually works better than large per-instance pools.
### Monitoring Connection Usage
Check active connections in PostgreSQL:
```sql
SELECT count(*) FROM pg_stat_activity WHERE datname = 'your_database';
```

Check the maximum:

```sql
SHOW max_connections;
```

If you're consistently near the max, you need a pooler or to reduce connection usage.
### Connection Leaks
If connections aren't returned to the pool, you'll eventually exhaust it. Common causes:
- Not awaiting Prisma queries properly
- Uncaught exceptions preventing cleanup
- Infinite loops holding connections
Enable query logging to identify long-running queries:
```typescript
const prisma = new PrismaClient({
  log: [
    { emit: 'event', level: 'query' },
  ],
});

prisma.$on('query', (e) => {
  console.log('Query: ' + e.query);
  console.log('Duration: ' + e.duration + 'ms');
});
```

### PgBouncer Transaction vs Session Pooling
- Transaction mode: Connection held only during transactions (recommended for Prisma)
- Session mode: Connection held for entire session
Prisma works with transaction mode when you add `?pgbouncer=true` to disable prepared statements.
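A minimal `pgbouncer.ini` sketch for transaction pooling (host name and pool sizes here are placeholders, not a recommended production configuration):

```ini
[databases]
; route "db" to the real Postgres server (placeholder host)
db = host=postgres-host port=5432 dbname=db

[pgbouncer]
listen_port = 6432
pool_mode = transaction      ; hold a server connection only per transaction
max_client_conn = 500        ; client connections PgBouncer will accept
default_pool_size = 20       ; server connections per database/user pair
```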
### Prisma Accelerate Benefits
Prisma Accelerate provides:
- Managed connection pooling optimized for serverless
- Global query caching
- No infrastructure to manage
Consider it if you're spending significant time debugging connection issues.