The Prisma P5002 error occurs when a database query through Accelerate exceeds the global timeout limit (10 seconds for regular queries, 15 seconds for transactions). This typically happens due to insufficient database connections, slow queries, database resource constraints, or geographic latency. The fix involves increasing connection limits, optimizing queries, or working around the limits with retries and caching.
The P5002 error in Prisma Accelerate indicates that a database request could not complete within the configured timeout period. Prisma Accelerate is Prisma's global database proxy that provides connection pooling, caching, and performance optimization for database queries. When a query is sent through Accelerate, it has specific timeout thresholds:

- **10 seconds** for standard queries
- **15 seconds** for interactive transactions

If the query execution time (including connection pool wait time, network latency, and actual query execution) exceeds these limits, Accelerate terminates the request and returns a P5002 error. This is a protective mechanism to prevent queries from hanging indefinitely and consuming resources. However, it can surface when your database is under load, your queries are inefficient, or your connection pool is exhausted.
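Before changing any configuration, it helps to confirm that your failures really are P5002 timeouts. As a minimal sketch (the `isAccelerateTimeout` helper is hypothetical; it assumes the caught error exposes Prisma's `code` property, as known request errors do):

```typescript
// Hypothetical helper: recognize an Accelerate timeout from a caught error.
// The structural check avoids importing Prisma's error classes.
function isAccelerateTimeout(error: unknown): boolean {
  return (
    typeof error === 'object' &&
    error !== null &&
    (error as { code?: string }).code === 'P5002'
  );
}
```

Branching on the error code rather than message text keeps the check stable across Prisma versions, whose error messages may change.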
First, verify how many database connections are currently configured in your Accelerate setup. The default connection limit is 10, which may be insufficient for high-traffic applications.
Review your database connection string or Accelerate configuration:
```prisma
// In your Prisma schema (schema.prisma)
datasource db {
  provider = "postgresql" // or mysql, etc.
  url      = env("DATABASE_URL")
  // Default connection_limit is 10 if not specified
}
```

Check your DATABASE_URL environment variable for the connection_limit parameter:
```bash
# Print your current DATABASE_URL to see connection settings
echo $DATABASE_URL
# Look for connection_limit in the URL, e.g.:
# prisma://user:password@host/?connection_limit=10
```

If using Accelerate with a proxy URL format:
```bash
# Accelerate connection string format
DATABASE_URL="prisma://[YOUR-ACCELERATE-API-KEY]@accelerate.prisma-data.net/?connection_limit=10"
```

If using a direct database connection:
```bash
# Direct database connection (PostgreSQL example)
DATABASE_URL="postgresql://user:password@host:5432/database?connection_limit=10"
```

Document the current limit; you'll likely need to increase it.
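If you want to log the current setting at startup rather than eyeballing the URL, a small sketch (the `connectionLimitFrom` helper is hypothetical; it relies on the standard WHATWG `URL` parser available in Node):

```typescript
// Hypothetical startup check: read the connection_limit query parameter
// out of a DATABASE_URL. Returns undefined when the parameter is absent.
function connectionLimitFrom(databaseUrl: string): number | undefined {
  const raw = new URL(databaseUrl).searchParams.get('connection_limit');
  return raw === null ? undefined : Number(raw);
}
```

This works for `prisma://`, `postgresql://`, and `mysql://` URLs alike, since query parameters are parsed regardless of scheme.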
Increase the connection_limit parameter based on your expected concurrent query load. A reasonable starting point is 2-3x your expected peak concurrent connections:
For Accelerate connections:
```bash
# Increase from default 10 to 30
DATABASE_URL="prisma://[YOUR-API-KEY]@accelerate.prisma-data.net/?connection_limit=30"
```

For PostgreSQL direct connections:

```bash
# Increase connection limit to 30
DATABASE_URL="postgresql://user:password@host:5432/database?connection_limit=30&schema=public"
```

For MySQL direct connections:

```bash
DATABASE_URL="mysql://user:password@host:3306/database?connection_limit=30"
```

Important considerations when increasing the limit:
- Your database server must support the number of connections you specify
- Check your database's max_connections setting:
PostgreSQL:
```bash
# Connect to database and check max_connections
psql -c "SHOW max_connections;"
```

MySQL:

```bash
# Check max_connections
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
```

- Reserve some connections for administrative tasks (leave 10-20% unused)
- Example: if your database allows 100 connections, set connection_limit to 80
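The headroom rule above can be sketched as a small helper (the `safeConnectionLimit` function is hypothetical, shown only to make the arithmetic concrete):

```typescript
// Sketch of the 10-20% headroom rule: derive a connection_limit from the
// server's max_connections, reserving a fraction for admin sessions.
// Rounded to the nearest whole connection.
function safeConnectionLimit(maxConnections: number, reserved = 0.2): number {
  return Math.round(maxConnections * (1 - reserved));
}
```

For example, a server allowing 100 connections with the default 20% reserve yields a limit of 80, matching the example above.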
After updating the environment variable:
In `.env.local`:

```bash
# Updated with higher connection limit
DATABASE_URL="prisma://[YOUR-API-KEY]@accelerate.prisma-data.net/?connection_limit=30"
```

Restart your application for the new connection limit to take effect.
Many P5002 timeouts occur because individual queries take too long. Use Prisma logging to identify problematic queries:
Enable query logging in your application:
```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient({
  log: [
    { emit: 'stdout', level: 'query' },
    { emit: 'stdout', level: 'warn' },
    { emit: 'stdout', level: 'error' }
  ]
});
```

Or use the DEBUG environment variable:
```bash
# See all Prisma queries (quoted so the shell doesn't expand the glob)
DEBUG="prisma*" npm run dev
```

Look for queries taking more than a few seconds. Common optimization strategies:
Strategy 1: Use select to fetch only needed fields
Instead of:
```typescript
// Slow - fetches all fields including large JSON/text columns
const user = await prisma.user.findUnique({
  where: { id: userId },
  include: { posts: true, comments: true, profile: true }
});
```

Use:
```typescript
// Fast - only fetch needed fields
const user = await prisma.user.findUnique({
  where: { id: userId },
  select: {
    id: true,
    name: true,
    email: true,
    posts: {
      select: { id: true, title: true }
    }
  }
});
```

Strategy 2: Add pagination to large result sets
Instead of:
```typescript
// Slow - fetches thousands of records
const allPosts = await prisma.post.findMany();
```

Use:
```typescript
// Fast - fetch in pages
const posts = await prisma.post.findMany({
  take: 50,
  skip: (pageNumber - 1) * 50,
  orderBy: { createdAt: 'desc' }
});
```

Strategy 3: Use relations more efficiently
Instead of:
```typescript
// N+1 query problem - very slow
const users = await prisma.user.findMany();
for (const user of users) {
  const posts = await prisma.post.findMany({
    where: { authorId: user.id }
  });
}
```

Use:
```typescript
// Single efficient query
const users = await prisma.user.findMany({
  include: {
    posts: { select: { id: true, title: true } }
  }
});
```

Strategy 4: Ensure proper indexes on frequently queried columns
Add indexes in your Prisma schema:
```prisma
model Post {
  id        String   @id @default(cuid())
  title     String
  authorId  String
  createdAt DateTime @default(now())

  @@index([authorId])  // Index for author queries
  @@index([createdAt]) // Index for date range queries
}
```

Apply the migration:

```bash
npx prisma migrate dev --name add-indexes
```

Sometimes databases enter sleep/idle mode (especially with managed services), causing initial requests to time out while they wake up. Test your database connectivity:
Test direct database connection:
```typescript
// test-connection.ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function testDatabaseConnection() {
  try {
    const result = await prisma.$queryRaw`SELECT NOW()`;
    console.log('Database is responsive:', result);
    await prisma.$disconnect();
  } catch (error) {
    console.error('Database connection failed:', error);
    process.exit(1);
  }
}

testDatabaseConnection();
```

Run the test:

```bash
npx ts-node test-connection.ts
```

If the database is frequently entering sleep mode:
1. Check if your database provider has idle timeout settings
2. Configure "keep-alive" by enabling connection pooling
3. Set up a scheduled task to ping the database every few minutes:
```typescript
// Keep database awake by pinging every 5 minutes
setInterval(async () => {
  try {
    await prisma.$queryRaw`SELECT 1`;
  } catch (error) {
    console.error('Keep-alive ping failed:', error);
  }
}, 5 * 60 * 1000);
```

High database load from other sources can cause P5002 timeouts. Monitor your database resource usage:
PostgreSQL - Check active connections and resource usage:
```sql
-- View active connections
SELECT pid, usename, application_name, query, state
FROM pg_stat_activity
WHERE state != 'idle';

-- View query performance (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the column is mean_exec_time instead of mean_time)
SELECT query, mean_time, calls
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
```

MySQL - Check active connections:
```sql
-- View active connections
SHOW PROCESSLIST;

-- Show running threads
SELECT * FROM information_schema.processlist WHERE COMMAND != 'Sleep';
```

Look for:
- Runaway queries (queries with very high mean_time)
- Long-running analytics or backup operations
- Other applications consuming connections
- Inefficient queries executed frequently
Solutions if high contention is found:
1. Kill long-running queries (carefully, during maintenance windows):
```sql
-- PostgreSQL: terminate matching backends (the pg_backend_pid() guard
-- prevents this statement from terminating its own session)
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE query LIKE '%expensive_operation%'
  AND pid <> pg_backend_pid();
```

2. Schedule heavy operations during off-peak hours
3. Use read replicas for analytics queries:
```typescript
// Separate read replica for reports
const readPrisma = new PrismaClient({
  datasources: {
    db: { url: process.env.DATABASE_READ_REPLICA_URL }
  }
});

// Run expensive queries on replica
const report = await readPrisma.orderHistory.findMany({
  where: { status: 'completed' }
});
```

For high-traffic applications, use connection pooling and batch multiple operations to reduce connection overhead:
Enable Prisma Accelerate with pooling (recommended):
Accelerate automatically handles connection pooling, but ensure it's configured:
```bash
# Use Accelerate URL format for automatic pooling
DATABASE_URL="prisma://[YOUR-API-KEY]@accelerate.prisma-data.net/?connection_limit=30"
```

Use PgBouncer for PostgreSQL (if not using Accelerate):
```bash
# Install pgbouncer
sudo apt-get install pgbouncer
```

```ini
# Configure in pgbouncer.ini
[databases]
mydb = host=actual-db-host port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
```

Then connect through PgBouncer. The `pgbouncer=true` flag tells Prisma to disable prepared statements, which transaction-mode PgBouncer requires:

```bash
DATABASE_URL="postgresql://user:password@pgbouncer-host:6432/mydb?connection_limit=30&pgbouncer=true"
```

Batch multiple operations into transactions:
Instead of:
```typescript
// Inefficient - 10 separate queries, 10 round trips to the database
for (let i = 0; i < 10; i++) {
  await prisma.user.update({
    where: { id: userIds[i] },
    data: { status: 'active' }
  });
}
```

Use:
```typescript
// Efficient - single transaction, single connection
await prisma.$transaction(
  userIds.map(id =>
    prisma.user.update({
      where: { id },
      data: { status: 'active' }
    })
  )
);
```

As a last resort before contacting Prisma support, implement application-level timeout handling and exponential backoff retry logic:
Basic retry wrapper:
```typescript
async function queryWithRetry<T>(
  operation: () => Promise<T>,
  maxRetries: number = 3,
  initialDelayMs: number = 100
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      // Prisma exposes the error code on the `code` property of known
      // request errors; checking it is more reliable than parsing messages
      const code = (error as { code?: string }).code;
      if (attempt < maxRetries - 1 && code === 'P5002') {
        const delayMs = initialDelayMs * Math.pow(2, attempt);
        console.log(`Attempt ${attempt + 1} failed, retrying in ${delayMs}ms`);
        await new Promise(resolve => setTimeout(resolve, delayMs));
      } else {
        throw error;
      }
    }
  }
  // Unreachable (the loop always returns or throws), but satisfies TypeScript
  throw new Error('queryWithRetry: retries exhausted');
}

// Usage
const user = await queryWithRetry(() =>
  prisma.user.findUnique({ where: { id: userId } })
);
```

Implement a per-request timeout:
```typescript
async function queryWithTimeout<T>(
  operation: () => Promise<T>,
  timeoutMs: number = 8000 // Time out before Accelerate's 10s limit
): Promise<T> {
  return Promise.race([
    operation(),
    new Promise<T>((_, reject) =>
      setTimeout(
        () => reject(new Error('Query timeout')),
        timeoutMs
      )
    )
  ]);
}

// Usage
try {
  const user = await queryWithTimeout(
    () => prisma.user.findUnique({ where: { id: userId } }),
    8000
  );
} catch (error) {
  console.error('Request timed out:', error);
  // Handle gracefully - show cached data, fallback, etc.
}
```

For interactive transactions with a higher timeout:
Prisma interactive transactions through Accelerate run against a 15-second ceiling. You can tune `maxWait` and `timeout` up to that limit:
```typescript
const result = await prisma.$transaction(
  async (tx) => {
    // Your transaction logic here
    const user = await tx.user.findUnique({ where: { id: 1 } });
    await tx.user.update({
      where: { id: 1 },
      data: { updated: true }
    });
    return user;
  },
  {
    maxWait: 5000,  // Max time to wait for a connection from the pool
    timeout: 15000  // Transaction timeout (cannot exceed 15s through Accelerate)
  }
);
```

Accelerate-Specific Optimization:
Prisma Accelerate is a managed proxy that sits between your application and database. Unlike direct database connections, it has additional layers that can add latency:
1. Request routing through Accelerate infrastructure
2. Connection pooling in the proxy
3. Optional query result caching
For optimal performance with Accelerate:
```typescript
// Use Accelerate's per-query caching for frequently-read data
// (requires extending the client with withAccelerate() from
// @prisma/extension-accelerate)
const cachedUser = await prisma.user.findUnique({
  where: { id: userId },
  select: { id: true, name: true, email: true }, // Smaller payload
  cacheStrategy: { ttl: 60 } // Serve cached results for up to 60 seconds
});
```

Geographic Latency Considerations:
If your application and database are in different regions, network latency compounds the timeout issue. Request latency can easily add 50-200ms to every query. Solutions:
1. Deploy your application closer to the database region
2. Use database read replicas in your application's region
3. Cache frequently-accessed data using Redis or similar
```typescript
// Cache layer example (assumes the @upstash/redis client, which exports a
// Redis class; adjust for your Redis library of choice)
import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

async function getUserCached(userId: string) {
  // @upstash/redis deserializes stored JSON values automatically
  const cached = await redis.get(`user:${userId}`);
  if (cached) return cached;

  const user = await prisma.user.findUnique({
    where: { id: userId }
  });
  await redis.setex(`user:${userId}`, 3600, JSON.stringify(user));
  return user;
}
```

Connection Pool Math:
A well-tuned connection pool should have:
- Minimum of 5 connections (development)
- For production: expected_concurrent_requests × 1.5
- Example: if you expect 20 concurrent requests, set connection_limit to 30
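The production sizing rule above can be sketched as a helper (the `recommendedPoolSize` function is hypothetical, shown only to make the arithmetic explicit):

```typescript
// Sketch of the sizing rule: 1.5x expected peak concurrency, with a floor
// of 5 connections for development environments.
function recommendedPoolSize(peakConcurrentRequests: number): number {
  return Math.max(5, Math.ceil(peakConcurrentRequests * 1.5));
}
```

For example, 20 expected concurrent requests yields a connection_limit of 30, matching the example above.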
Monitor actual connection usage:
```typescript
// Log connection pool stats periodically (PostgreSQL)
setInterval(async () => {
  const result = await prisma.$queryRaw`
    SELECT count(*)::int AS total_connections -- cast so the driver returns a plain number, not BigInt
    FROM pg_stat_activity
  `;
  console.log('Database connections in use:', result);
}, 60000);
```

When to Contact Prisma Support:
Contact Prisma support (support@prisma.io) if:
1. You've increased connection_limit but still get P5002 errors
2. Your application has specific timeout requirements exceeding 10/15 seconds
3. You need guidance on capacity planning for your workload
4. You're experiencing intermittent P5002 errors despite all optimizations