P1008 occurs when Prisma Client cannot complete database operations within the connection timeout window, typically caused by database overload, pool exhaustion, or slow queries. Solutions involve adjusting timeout settings, optimizing queries, or managing connection pools.
The P1008 error indicates that your Prisma Client attempted to execute a database operation but the operation did not complete within the configured timeout period. This timeout can occur at different stages: when establishing a connection to the database, when executing a query, or when fetching a new connection from the connection pool. Prisma manages a client-side connection pool that maintains connections to your database. When all connections are in use and a new query arrives, it waits in a queue for an available connection. If no connection becomes available within the pool timeout (default 10 seconds), P1008 is thrown. The actual cause depends on your setup: the database may be slow due to resource constraints, long-running queries may hold connections hostage, or the pool size may be too small for your workload.
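In application code, the failure surfaces as a Prisma error carrying the P1008 code. Here is a minimal sketch of detecting it, assuming a User model and using the error classes exported from @prisma/client:
import { Prisma, PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function loadUsers() {
  try {
    return await prisma.user.findMany();
  } catch (error) {
    // P1008 can surface as a known request error (query stage) or as an
    // initialization error (connect stage), so check both shapes.
    const code =
      error instanceof Prisma.PrismaClientKnownRequestError
        ? error.code
        : error instanceof Prisma.PrismaClientInitializationError
          ? error.errorCode
          : undefined;
    if (code === "P1008") {
      console.error("Database operation timed out (P1008)");
    }
    throw error;
  }
}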
First, verify that your database server is actually running and reachable:
# For PostgreSQL
psql -h <your-host> -U <your-user> -d <your-db> -c "SELECT 1"
# For MySQL
mysql -h <your-host> -u <your-user> -p <your-db> -e "SELECT 1"
If the database is unreachable, verify:
- Database service is running
- Firewall rules allow your client IP
- Connection string (hostname, port, credentials) is correct
- Cloud provider hasn't restarted or migrated your database
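You can run the same reachability check from the application side with Prisma itself. A small sketch (it reads DATABASE_URL from your environment, so it exercises the exact connection your app uses):
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function checkDatabase(): Promise<void> {
  // A trivial query that only succeeds if a connection can be established.
  await prisma.$queryRaw`SELECT 1`;
  console.log("Database reachable");
}

checkDatabase()
  .catch((error) => {
    console.error("Database unreachable:", error);
    process.exitCode = 1;
  })
  .finally(() => prisma.$disconnect());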
The pool_timeout parameter controls how long Prisma waits for a connection to become available. Increase it from the default 10 seconds to give the pooler more time to allocate connections.
Update your .env.local or environment variable:
DATABASE_URL="postgresql://user:password@localhost:5432/mydb?connection_limit=10&pool_timeout=20"For migrations, ensure connect_timeout is also increased:
DATABASE_URL="postgresql://user:password@localhost:5432/mydb?connect_timeout=30&pool_timeout=20"Common values:
- pool_timeout=20 - Moderate workloads (default 10)
- pool_timeout=30 - Higher concurrency or slower databases
- pool_timeout=60 - Very large migrations or serverless
Note: PgBouncer in transaction mode caps the timeout at 30 seconds, so 30 is the effective maximum for those setups.
The connection_limit parameter sets the maximum number of connections in Prisma's pool. If your application has many concurrent requests, increase this limit.
Update your connection string:
DATABASE_URL="postgresql://user:password@localhost:5432/mydb?connection_limit=15&pool_timeout=20"Default value: num_physical_cpus * 2 + 1
Guidelines:
- Count your expected concurrent requests (e.g., Node.js worker count)
- Set connection_limit to handle that concurrency
- Monitor: Too high wastes database resources; too low causes timeouts
- For serverless, consider external pooling (PgBouncer, Supabase Connection Pooling)
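For example, four Node.js instances each configured with connection_limit=10 can open up to 40 connections in total, which must fit within the database server's own limit (PostgreSQL's max_connections defaults to 100) while leaving headroom for migrations and admin tools.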
Verify current connections:
-- PostgreSQL
SELECT count(*) FROM pg_stat_activity;
-- MySQL
SHOW STATUS WHERE variable_name = 'Threads_connected';
Long-running queries hold connections, blocking others. Identify and optimize them:
Enable Prisma query logging to see slow queries:
const prisma = new PrismaClient({
log: ['query'],
});
Run your application and look for queries taking >1 second.
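If you prefer to capture durations programmatically rather than reading stdout, Prisma's event-based logging exposes each query's duration in milliseconds. A minimal sketch (the 1000 ms threshold is just an illustrative cutoff):
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient({
  log: [{ emit: "event", level: "query" }],
});

// Each "query" event carries the SQL text and its duration in milliseconds.
prisma.$on("query", (event) => {
  if (event.duration > 1000) {
    console.warn(`Slow query (${event.duration} ms): ${event.query}`);
  }
});
Once you have identified the slow queries: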
1. Use `select` to fetch only needed fields:
// Bad: Fetches everything
const user = await prisma.user.findUnique({ where: { id: 1 } });
// Good: Fetches only what you need
const user = await prisma.user.findUnique({
where: { id: 1 },
select: { id: true, email: true, name: true },
});
2. Add database indexes on frequently filtered fields:
model Post {
id Int @id @default(autoincrement())
title String
authorId Int
author User @relation(fields: [authorId], references: [id])
@@index([authorId]) // Index for filtering
}
3. Avoid N+1 queries with `include` or `select`:
// Bad: N+1 problem
const users = await prisma.user.findMany();
for (const user of users) {
const posts = await prisma.post.findMany({ where: { authorId: user.id } });
}
// Good: Single query with relations
const users = await prisma.user.findMany({
include: { posts: true },
});
4. Use raw queries for complex operations:
const results = await prisma.$queryRaw`
SELECT u.id, u.name, COUNT(p.id) as post_count
FROM User u
LEFT JOIN Post p ON u.id = p.authorId
GROUP BY u.id
`;
Migrations on large tables can exceed the default 5-second connect_timeout. Set it explicitly:
DATABASE_URL="postgresql://user:password@localhost:5432/mydb?connect_timeout=30&socket_timeout=30"If a migration times out but actually succeeded, resolve it:
# Check migration status
npx prisma migrate status
# If migration shows as failed but changes were applied:
npx prisma migrate resolve --rolled-back "<migration_name>"
# Or mark it as successfully applied:
npx prisma migrate resolve --applied "<migration_name>"
For very large migrations, consider:
- Running migrations during low-traffic windows
- Temporarily increasing database resource limits
- Breaking migrations into smaller steps
Serverless functions (AWS Lambda, Vercel Functions) create new processes frequently, exhausting database connections. Use an external pooler:
Option 1: Prisma Accelerate (managed)
DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=YOUR_API_KEY"Option 2: PgBouncer (self-hosted)
Install and configure PgBouncer:
[databases]
mydb = host=production-db.example.com port=5432 dbname=mydb
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
Point Prisma to PgBouncer instead of the database:
DATABASE_URL="postgresql://user:password@pgbouncer-host:6432/mydb"Option 3: Supabase Connection Pooling
Enable in Supabase dashboard → Database → Connection Pooling, then use the pooling URL.
Reuse PrismaClient in serverless:
// lib/prisma.ts
import { PrismaClient } from "@prisma/client";
const prismaClientSingleton = () => {
return new PrismaClient();
};
type PrismaClientSingleton = ReturnType<typeof prismaClientSingleton>;
const globalForPrisma = globalThis as unknown as {
prisma: PrismaClientSingleton | undefined;
};
const prisma = globalForPrisma.prisma ?? prismaClientSingleton();
export default prisma;
if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = prisma;
Prisma transactions have a default 5-second timeout. Increase it if needed:
await prisma.$transaction(
async (tx) => {
// Multi-step operation
await tx.user.update({ where: { id: 1 }, data: { name: "Alice" } });
await tx.post.create({ data: { title: "New Post", authorId: 1 } });
},
{
timeout: 30000, // 30 seconds (in milliseconds)
}
);
Use transactions sparingly; they also hold connections. Keep transactions short and avoid nested transactions.
Sometimes timeouts are temporary (brief database hiccups). Add exponential backoff retry:
async function executeWithRetry<T>(
fn: () => Promise<T>,
maxRetries: number = 3,
baseDelay: number = 100
): Promise<T> {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await fn();
} catch (error) {
if (
attempt === maxRetries - 1 ||
!String(error).includes("P1008")
) {
throw error;
}
const delay = baseDelay * Math.pow(2, attempt);
await new Promise((resolve) => setTimeout(resolve, delay));
}
}
throw new Error("Max retries exceeded");
}
// Usage:
const user = await executeWithRetry(() =>
prisma.user.findUnique({ where: { id: 1 } })
);
This buys time for the database to recover without requiring manual intervention.
Understanding connection pool states:
The Prisma connection pool uses a state machine. When all connections are busy and a new query arrives, the query enters a "waiting for available connection" state. If no connection is released within pool_timeout, P1008 is raised.
Database-specific considerations:
- PostgreSQL: Supports connection_limit and pool_timeout parameters directly in the connection string. PgBouncer adds complexity but is essential for serverless.
- MySQL: Similar parameters, but connection pooling behavior differs slightly. Consider ProxySQL for advanced pooling.
- SQL Server: Less mature connection pooling support; consider external pooling.
Monitoring connection usage:
Track your actual connection utilization to right-size pool settings:
-- PostgreSQL: Show active connections
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;
-- MySQL: Monitor threads
SHOW STATUS LIKE 'Threads%';
If actual usage is consistently <50% of connection_limit, your limit is probably too high. If it's >80%, increase it.
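To observe Prisma's own pool rather than the database side, the client's metrics preview feature can help. A sketch, assuming you add previewFeatures = ["metrics"] to the generator block in schema.prisma and regenerate the client:
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function dumpPoolMetrics(): Promise<void> {
  // $metrics.json() returns counters, gauges, and histograms; the pool
  // gauges report how many connections are currently open, busy, and idle.
  const metrics = await prisma.$metrics.json();
  console.log(JSON.stringify(metrics, null, 2));
}

dumpPoolMetrics().finally(() => prisma.$disconnect());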
Serverless cold starts and P1008:
In serverless environments, each concurrent function instance runs in its own process with its own connection pool. This rapidly depletes database connection limits. Solutions:
1. Use external pooling (PgBouncer, Prisma Accelerate)
2. Keep PrismaClient instance warm across invocations via global reuse
3. Set shorter function timeouts and accept that some may fail temporarily
Development vs. production:
In development, increase pool_timeout liberally (60+ seconds) since debugging and hot reloads are normal. In production, balance responsiveness (fail fast) with reliability (allow recovery).
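One way to vary these settings per environment without maintaining separate connection strings is to override the datasource URL when constructing the client. A sketch, assuming the datasource in schema.prisma is named db and that DATABASE_URL carries no query parameters of its own:
import { PrismaClient } from "@prisma/client";

// Generous timeout while debugging locally; fail fast in production.
const poolTimeout = process.env.NODE_ENV === "production" ? 20 : 60;

const prisma = new PrismaClient({
  datasources: {
    db: {
      url: `${process.env.DATABASE_URL}?connection_limit=10&pool_timeout=${poolTimeout}`,
    },
  },
});

export default prisma;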