Prisma successfully connected to your database server but the operation timed out before completion. This typically results from slow queries, database overload, network latency, or insufficient connection timeout settings.
The P1002 error indicates that Prisma Client established a network connection to your database server, but the database failed to respond to a query within the configured timeout period. Unlike P1001, this is not a connection-level failure: the server is reachable, but something caused the operation to hang. Common scenarios include:
- A query is executing too slowly on an overloaded database
- The database server is under heavy load and cannot process requests quickly
- A lock acquisition (e.g., pg_advisory_lock) is timing out
- The connection pool is exhausted and new connections cannot be allocated
- Network latency between the application and the database is too high
- Large result sets are taking too long to transfer
- Serverless databases are still warming up after a cold start
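When the error surfaces at runtime, it helps to confirm the code programmatically before deciding how to react. A minimal sketch, assuming a User model exists; connection-level failures such as P1002 are typically reported as a PrismaClientInitializationError carrying an errorCode field, but log the raw error in your own setup to confirm:
import { PrismaClient, Prisma } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  try {
    // Placeholder query; any operation can hit the timeout
    await prisma.user.findMany();
  } catch (error) {
    if (
      error instanceof Prisma.PrismaClientInitializationError &&
      error.errorCode === 'P1002'
    ) {
      console.error('Database reachable but timed out (P1002):', error.message);
      return;
    }
    throw error;
  }
}

main();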
Add or increase the connect_timeout and pool_timeout parameters in your DATABASE_URL:
PostgreSQL:
DATABASE_URL="postgresql://user:pass@host:5432/db?connect_timeout=30&pool_timeout=30"MySQL:
DATABASE_URL="mysql://user:pass@host:3306/db?connect_timeout=30&pool_timeout=30"The timeout values are in seconds. Start with 30 seconds for testing:
- connect_timeout: Time to establish a new connection
- pool_timeout: Time to get a connection from the pool
For production, set reasonable values based on your application's latency requirements.
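The same parameters can also be attached at runtime instead of hard-coding them into the environment. A minimal sketch, assuming the datasource block in schema.prisma is named db and DATABASE_URL is already set:
import { PrismaClient } from '@prisma/client';

// Append timeout parameters to the base connection string at startup.
const url = new URL(process.env.DATABASE_URL as string);
url.searchParams.set('connect_timeout', '30');
url.searchParams.set('pool_timeout', '30');

const prisma = new PrismaClient({
  datasources: {
    db: { url: url.toString() },
  },
});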
Verify your database isn't under excessive load:
PostgreSQL - Check active connections:
SELECT count(*) as active_connections
FROM pg_stat_activity
WHERE state = 'active';
PostgreSQL - Check for slow queries:
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
(Requires the pg_stat_statements extension; on PostgreSQL 12 and earlier the column is named mean_time.)
PostgreSQL - Check for locks:
SELECT * FROM pg_locks WHERE NOT granted;
If you see high load, slow queries, or many blocked locks, you need to optimize queries or scale your database resources.
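If you prefer running these checks from the application rather than psql, prisma.$queryRaw can execute the same diagnostics. A minimal sketch (PostgreSQL returns count(*) as a bigint, hence the BigInt typing):
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Snapshot of database pressure: active connections and ungranted locks.
async function checkDatabaseLoad() {
  const [active] = await prisma.$queryRaw<{ active_connections: bigint }[]>`
    SELECT count(*) AS active_connections
    FROM pg_stat_activity
    WHERE state = 'active'`;

  const [blocked] = await prisma.$queryRaw<{ blocked_locks: bigint }[]>`
    SELECT count(*) AS blocked_locks
    FROM pg_locks
    WHERE NOT granted`;

  console.log(`Active connections: ${active.active_connections}`);
  console.log(`Blocked lock requests: ${blocked.blocked_locks}`);
}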
Limit connection usage to prevent pool exhaustion:
Update your DATABASE_URL:
DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=5"For serverless environments (Vercel, AWS Lambda), use even fewer connections:
DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=1&pool_timeout=30"Add this to your Prisma Client configuration:
// prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}
This forces the connection pool to be smaller and prevents resource exhaustion.
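In serverless and hot-reloading environments, also make sure only one PrismaClient exists per process; every extra instance opens its own pool and can exhaust connections regardless of connection_limit. A common sketch (the globalThis caching is a widely used community pattern, not something Prisma mandates):
import { PrismaClient } from '@prisma/client';

// Reuse a single PrismaClient (and therefore a single connection pool) per process.
const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient };

export const prisma = globalForPrisma.prisma ?? new PrismaClient();

// Cache the instance outside production so dev hot reloads don't create new pools.
if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma;
}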
Profile your application to identify slow queries:
Enable Prisma query logging:
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient({
  log: ['query', 'error', 'warn'],
});
Add indexes to frequently queried columns:
model User {
  id    Int     @id @default(autoincrement())
  email String  @unique // @unique already creates an index on email
  name  String?

  @@index([name]) // Add indexes for columns you filter or sort by
}
Use select to fetch only needed fields:
// ❌ Bad - fetches all fields
const users = await prisma.user.findMany();
// ✅ Good - only fetch needed fields
const users = await prisma.user.findMany({
  select: { id: true, email: true },
});
Run migrations and test performance:
npx prisma migrate dev
For serverless functions (Vercel, AWS Lambda, Netlify), use a connection pooler:
Option 1: Prisma Accelerate (Recommended)
npm install @prisma/extension-accelerate
import { PrismaClient } from '@prisma/client';
import { withAccelerate } from '@prisma/extension-accelerate';
const prisma = new PrismaClient().$extends(withAccelerate());
Note that the datasource URL must be the Accelerate connection string generated in the Prisma Data Platform rather than a direct database URL.
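Besides pooling, the Accelerate extension can cache query results, which takes repeated reads off the database entirely. A minimal sketch of the cacheStrategy option (the 60-second TTL and the User model are assumptions):
import { PrismaClient } from '@prisma/client';
import { withAccelerate } from '@prisma/extension-accelerate';

const prisma = new PrismaClient().$extends(withAccelerate());

// Serve repeated reads from Accelerate's cache for up to 60 seconds.
const users = await prisma.user.findMany({
  cacheStrategy: { ttl: 60 },
});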
Option 2: PgBouncer (Self-hosted)
Add PgBouncer between your serverless function and PostgreSQL. Configure in pgbouncer.ini:
[databases]
mydb = host=actual-db.com port=5432 user=dbuser password=dbpass
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 10
Then update DATABASE_URL to point to PgBouncer (usually localhost:6432) and add pgbouncer=true, which Prisma requires when connecting through PgBouncer in transaction mode.
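For example, assuming PgBouncer runs next to the application on localhost:6432 and exposes the mydb alias defined above (adjust host, port, and credentials to your deployment):
DATABASE_URL="postgresql://dbuser:dbpass@localhost:6432/mydb?pgbouncer=true"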
Serverless databases like Neon may wake from idle and cause cold starts:
Neon - Adjust auto-suspend and connection settings:
In Neon project settings, configure:
- connection_timeout: 30 seconds
- pool_size: 10
- Auto-suspend: Disabled (if possible) or set to 10+ minutes
Supabase - Use Supabase-specific connection string:
Update your DATABASE_URL to use Supabase's pooler:
# Change from direct connection
DATABASE_URL="postgresql://user:pass@db.[PROJECT-REF].supabase.co:5432/postgres"
# To pooler connection (transaction mode)
DATABASE_URL="postgresql://user:pass@db.[PROJECT-REF].supabase.co:6543/postgres"
Transaction mode uses fewer connections but has some limitations; check the Supabase docs for compatibility.
P1002 can be a transient error. Implement retry logic:
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelay = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === maxAttempts) throw error;
      // Only retry timeout-related errors (P1002: server timeout, P2024: pool timeout)
      const code = (error as { code?: string }).code;
      if (code === 'P1002' || code === 'P2024') {
        const delay = baseDelay * Math.pow(2, attempt - 1);
        console.log(`Retry attempt ${attempt} after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  // Unreachable in practice, but satisfies TypeScript's return-path check
  throw lastError;
}
// Usage
const user = await withRetry(() =>
  prisma.user.findUnique({ where: { id: 1 } })
);
Advisory Lock Timeouts: If the error occurs during prisma migrate deploy, the issue is likely an advisory lock timeout. This happens when migrations can't acquire exclusive locks due to long-running queries. Solution: kill long-running queries before migrations, or increase the migration timeout, as sketched below.
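A hedged sketch of that cleanup step, using pg_stat_activity and pg_terminate_backend via prisma.$queryRaw; the five-minute cutoff is an arbitrary assumption, and terminating sessions should be done with care:
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Terminate queries running longer than five minutes so a pending
// migration can acquire its advisory lock.
async function killLongRunningQueries() {
  const victims = await prisma.$queryRaw<{ pid: number; query: string }[]>`
    SELECT pid, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND pid <> pg_backend_pid()
      AND now() - query_start > interval '5 minutes'`;

  for (const { pid, query } of victims) {
    console.warn(`Terminating pid ${pid}: ${query}`);
    await prisma.$queryRaw`SELECT pg_terminate_backend(${pid}::int)`;
  }
}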
PostgreSQL Connection Limits: By default, PostgreSQL allows ~100 concurrent connections. As your application scales, monitor SELECT count(*) FROM pg_stat_activity; to ensure you're not hitting this limit. Use connection pooling to maintain a sustainable number.
Network Latency: High latency between application and database (>100ms) increases timeout likelihood. Use monitoring tools (Datadog, New Relic) to measure actual query execution times. If latency is consistently high, consider moving your database closer to your application or caching frequently read data in the application layer.
Data Proxy / Accelerate: Prisma Data Proxy (now Prisma Accelerate) is specifically designed to handle timeout issues in serverless environments through intelligent connection pooling and caching. It's worth evaluating if you have frequent P1002 errors in production.
Load Testing: Before going to production, load test your database with tools like pgbench (PostgreSQL) to understand how your database behaves under concurrent load and determine appropriate timeout values.
Monitoring: Set up alerts on P1002 error rates. A sudden spike indicates database health issues requiring investigation. Use Prisma's logging and database metrics to correlate timeouts with high CPU, memory, or connection count.