A query exceeded the allowed execution time. Configure statement_timeout, optimize slow queries with indexes, monitor lock contention, and adjust timeout settings based on query complexity.
The "Query timeout" error occurs when a SQL query exceeds the maximum allowed execution time defined by PostgreSQL's timeout parameters. The primary parameter is statement_timeout, which aborts any statement that takes longer than the specified duration (default is no limit). This timeout protects the database from runaway queries that consume excessive CPU, memory, and lock resources. When a timeout triggers, PostgreSQL cancels the statement and rolls back any work performed by that query, but keeps the database connection alive for retry or subsequent commands.
Connect to the database and verify the current timeout settings:
SHOW statement_timeout;
SHOW lock_timeout;
SHOW idle_in_transaction_session_timeout;
Bare integer values are interpreted as milliseconds; a value of 0 disables the timeout. Check both database defaults and any session-level overrides that may apply to your connection.
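To see where a non-default value comes from (postgresql.conf, database, role, or session), pg_settings and the pg_db_role_setting catalog are helpful:
SELECT name, setting, unit, source FROM pg_settings WHERE name IN ('statement_timeout', 'lock_timeout', 'idle_in_transaction_session_timeout');
SELECT * FROM pg_db_role_setting; -- per-database and per-role overrides set via ALTER DATABASE/ALTER ROLE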
Configure PostgreSQL to log long-running queries so you can identify which statements are timing out:
ALTER DATABASE mydatabase SET log_min_duration_statement = 5000; -- Log queries > 5 seconds
ALTER DATABASE mydatabase SET log_statement = 'all'; -- Verbose: logs every statement; prefer log_min_duration_statement in production
Alternatively, query pg_stat_statements if enabled:
SELECT query, calls, mean_exec_time FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 10;
This reveals which queries are slowest on average.
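If the view does not exist, pg_stat_statements must be enabled first; changing shared_preload_libraries requires a server restart:
-- In postgresql.conf (restart required): shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements; -- run once in each database you want to track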
Use EXPLAIN ANALYZE on the problematic query to understand why it's slow:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM large_table WHERE status = 'active' ORDER BY created_at DESC LIMIT 100;
Look for sequential scans on large tables where index scans should occur. Nested loop joins over many rows often indicate missing indexes. Use the output to identify the bottleneck.
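As a hypothetical illustration (the plan and numbers are invented), this is the kind of node to look for, where millions of rows are scanned and discarded:
Seq Scan on large_table (cost=0.00..431250.00 rows=5000000 width=120) (actual time=0.41..8123.70 rows=4998231 loops=1)
  Filter: (status = 'active'::text)
  Rows Removed by Filter: 5001769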
Create indexes on frequently filtered or joined columns:
CREATE INDEX idx_status_created ON large_table(status, created_at DESC);
For complex queries, use covering indexes that INCLUDE the columns needed for index-only scans:
CREATE INDEX idx_covering ON table_name(filter_col) INCLUDE (data_col1, data_col2);
After adding indexes, re-run EXPLAIN ANALYZE to confirm the planner actually uses them.
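If queries almost always filter on the same value (an assumption about your workload), a partial index is smaller and cheaper to maintain than a full composite index:
CREATE INDEX idx_active_created ON large_table(created_at DESC) WHERE status = 'active'; -- indexes only the 'active' rows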
Refactor queries to reduce workload:
- Use JOINs instead of multiple subqueries
- Push filters down early (WHERE clauses before joins)
- Avoid SELECT * and specify only needed columns
- Use DISTINCT or GROUP BY sparingly (they force a sort or hash step)
- Consider materialized views for complex aggregations (see the sketch after the example below)
Example:
-- Potentially slow: the IN (subquery) form can be planned less efficiently
SELECT * FROM orders WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US') LIMIT 100;
-- Good: Join with filter first
SELECT o.* FROM orders o INNER JOIN customers c ON o.customer_id = c.id WHERE c.country = 'US' LIMIT 100;
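For the materialized-view option above, a minimal sketch (table and column names are illustrative):
CREATE MATERIALIZED VIEW daily_order_totals AS SELECT customer_id, date_trunc('day', created_at) AS day, count(*) AS orders FROM orders GROUP BY 1, 2;
CREATE UNIQUE INDEX ON daily_order_totals (customer_id, day);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_order_totals; -- CONCURRENTLY requires the unique index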
If the query legitimately requires more time after optimization, increase the timeout:
ALTER DATABASE mydatabase SET statement_timeout = '120s';
Or set it per session before running long operations:
SET statement_timeout = 300000; -- 5 minutes in milliseconds
SELECT ... FROM expensive_operation;
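To keep the override from leaking into other work on a pooled connection, SET LOCAL scopes it to a single transaction:
BEGIN;
SET LOCAL statement_timeout = '5min'; -- reverts automatically at COMMIT or ROLLBACK
SELECT ... FROM expensive_operation;
COMMIT;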
For application code, configure the timeout in your driver:
// Node.js example using node-postgres: statement_timeout is a client option, in milliseconds
const client = new Client({ statement_timeout: 120000 });
await client.query('SELECT ...');
If timeouts occur while waiting for locks, use lock_timeout to fail fast:
SET lock_timeout = '5s';
SET statement_timeout = '60s';
Monitor blocking sessions:
SELECT pid, usename, query FROM pg_stat_activity WHERE state = 'active';
SELECT * FROM pg_locks WHERE NOT granted;
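To map blocked sessions directly to their blockers, pg_blocking_pids (PostgreSQL 9.6+) saves joining pg_locks by hand:
SELECT pid, pg_blocking_pids(pid) AS blocked_by, wait_event_type, state, left(query, 60) AS query FROM pg_stat_activity WHERE cardinality(pg_blocking_pids(pid)) > 0; -- sessions currently blocked, and who blocks them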
Identify and terminate long-running transactions that block others:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'idle in transaction' AND query_start < now() - INTERVAL '10 minutes';
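If the offender is actively running a statement rather than sitting idle in transaction, pg_cancel_backend is the gentler option: it cancels the current query but keeps the session alive:
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE state = 'active' AND now() - query_start > INTERVAL '10 minutes';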
Re-run the previously timing-out query and measure execution time:
\timing on
SELECT ...; -- Check the "Time:" output
Monitor application logs for timeout errors after deployment. Set up alerting for slow queries:
CREATE TABLE slow_query_log (query text, duration_ms int, logged_at timestamp); -- optional: a table for your monitoring job to record offenders
ALTER DATABASE mydatabase SET log_min_duration_statement = 10000; -- Log queries > 10 seconds
For critical operations, test on a replica under production-like load before deploying to production.
PostgreSQL 17+ introduces transaction_timeout, which limits the duration of an entire transaction rather than individual statements. Use idle_in_transaction_session_timeout to prevent sessions from holding locks while idle between application requests. Lock contention is a common root cause of timeouts in production; connection pooling (PgBouncer or Pgpool-II) helps keep connection counts and resource usage under control. In application code, implement client-side retry logic with exponential backoff. Use pg_stat_activity and pg_stat_statements for real-time insight into slow queries. For write-heavy workloads, consider partitioning large tables or sharding to reduce contention. If you rely on prepared statements, check that cached generic plans are not worse than custom plans (see the plan_cache_mode setting). Row-level security policies add planning and execution overhead; profile them separately from base query time.
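A short sketch of the session-level settings mentioned above (transaction_timeout requires PostgreSQL 17 or later):
SET transaction_timeout = '5min'; -- PostgreSQL 17+: bounds the whole transaction
SET idle_in_transaction_session_timeout = '30s'; -- drops sessions idling inside a transaction
SET lock_timeout = '5s'; -- fail fast instead of queueing behind locks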