PostgreSQL error 57014 occurs when a query exceeds the configured statement_timeout limit and is automatically terminated. This protects the server from runaway queries. Increase the timeout, optimize the query, or check for locks causing delays.
Error 57014 (QUERY_CANCELED) is raised when PostgreSQL automatically cancels a query because it exceeded the maximum time allowed to execute. The statement_timeout parameter sets a limit on how long any single SQL statement is allowed to run. When a query surpasses this limit, PostgreSQL terminates it and returns this error to prevent long-running queries from consuming server resources indefinitely. By default, statement_timeout is set to 0 (disabled). However, many deployments or application frameworks set explicit timeouts to prevent runaway queries. The timeout applies individually to each SQL statement—if multiple statements are sent together, each statement has its own timeout countdown.
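In application code, this cancellation surfaces as SQLSTATE 57014. The following Python sketch (the helper names are hypothetical; the .pgcode attribute is how psycopg2 exposes the SQLSTATE on its exceptions) recognizes the error and optionally retries:

```python
QUERY_CANCELED = "57014"  # SQLSTATE reported when statement_timeout fires

def is_query_canceled(exc):
    # psycopg2 exceptions carry the SQLSTATE in their .pgcode attribute.
    return getattr(exc, "pgcode", None) == QUERY_CANCELED

def run_with_retry(execute, retries=1):
    # Hypothetical wrapper: retry when the server canceled the statement
    # for exceeding statement_timeout; re-raise anything else.
    for attempt in range(retries + 1):
        try:
            return execute()
        except Exception as exc:
            if is_query_canceled(exc) and attempt < retries:
                continue
            raise
```

Retrying only makes sense when the timeout was transient (for example, a lock held briefly by another session); a query that is simply too slow will time out again.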
First, determine what timeout limit is currently configured for your session or database.
SHOW statement_timeout;
This will display the current timeout value (e.g., "60s", "30000ms", or "0" for disabled). Also check whether it is set at other levels:
-- Session level
SHOW statement_timeout;
-- Database level (per-database settings are stored in pg_db_role_setting, not pg_database)
SELECT d.datname, s.setconfig
FROM pg_db_role_setting s
JOIN pg_database d ON s.setdatabase = d.oid
WHERE d.datname = 'your_database';
If the timeout is genuinely too short for your query, increase it for your current session:
SET statement_timeout = '120s';
You can use various time units: "us" (microseconds), "ms" (milliseconds), "s" (seconds), "min" (minutes), "h" (hours), or "d" (days). For example:
- SET statement_timeout = '5min'; — 5 minutes
- SET statement_timeout = '300000'; — 300 seconds (a bare number is interpreted as milliseconds)
- SET statement_timeout = '0'; — Disable timeout (unlimited)
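As a sanity check on these units, the conversion can be sketched in Python (a hypothetical helper, mirroring PostgreSQL's rule that a bare number is read as milliseconds):

```python
# Milliseconds per accepted statement_timeout unit.
_UNIT_MS = {"us": 0.001, "ms": 1, "s": 1000,
            "min": 60_000, "h": 3_600_000, "d": 86_400_000}

def timeout_to_ms(setting):
    """Convert a value like '5min', '120s', or '300000' to milliseconds."""
    text = str(setting).strip()
    # Check longer unit suffixes first so 'min' is never misread.
    for unit in sorted(_UNIT_MS, key=len, reverse=True):
        if text.endswith(unit):
            return float(text[: -len(unit)].strip()) * _UNIT_MS[unit]
    return float(text)  # no unit: PostgreSQL interprets the number as ms
```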
If you only need to increase the timeout for a specific transaction or operation, use SET LOCAL. This change applies only to the current transaction and automatically reverts afterward:
BEGIN;
SET LOCAL statement_timeout = '600s';
-- Your long-running query here
COMMIT;
This approach is safer and doesn't affect other sessions.
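The transaction-scoped pattern above can be wrapped in a small helper. This is a sketch for any DB-API style driver such as psycopg2; the function name and calling convention are assumptions, not part of any library:

```python
def run_with_local_timeout(conn, sql, timeout="600s"):
    # SET LOCAL scopes the timeout to this transaction only, so it
    # reverts automatically at COMMIT or ROLLBACK.
    # NOTE: timeout is interpolated into SQL; pass only trusted values.
    cur = conn.cursor()
    try:
        cur.execute("BEGIN")
        cur.execute(f"SET LOCAL statement_timeout = '{timeout}'")
        cur.execute(sql)
        rows = cur.fetchall()
        cur.execute("COMMIT")
        return rows
    except Exception:
        cur.execute("ROLLBACK")
        raise
```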
Rather than just increasing the timeout, investigate why the query is slow and fix the underlying performance issue.
Analyze the query execution plan:
EXPLAIN ANALYZE
SELECT * FROM large_table WHERE expensive_condition;
Look for:
- Sequential scans where index scans would be better
- Missing indexes on frequently filtered columns
- Join operations that could be optimized
Create missing indexes:
CREATE INDEX idx_column_name ON table_name(column_name);
Run ANALYZE to update the planner's statistics:
ANALYZE table_name;
Optimizing the query is generally preferable to increasing timeouts.
Sometimes timeouts occur because the query is blocked waiting for locks on tables. Check what is happening on the server:
SELECT pid, state, wait_event_type, wait_event, query FROM pg_stat_activity WHERE state = 'active';
This shows all active queries. A wait_event_type of 'Lock' means the query is waiting for a lock (the boolean waiting column that older guides reference was removed in PostgreSQL 9.6). To see lock requests that have not been granted:
SELECT * FROM pg_locks WHERE NOT granted;
If you identify a blocking query, you can terminate it (use with caution):
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid != pg_backend_pid() AND query LIKE '%old_query%';
If you want to increase the timeout for all sessions connecting to a specific database, use ALTER DATABASE:
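Since PostgreSQL 9.6, pg_blocking_pids(pid) reports exactly which backends block a given process. For illustration, pairing blocked sessions with lock holders from raw pg_locks rows can be sketched in Python (a deliberate simplification: real lock conflicts also depend on lock modes, which this ignores):

```python
def find_blockers(locks):
    # locks: iterable of (pid, locked_object, granted) tuples, as selected
    # from pg_locks. Returns {blocked_pid: [pids holding granted locks
    # on the same object]}.
    granted = {}
    for pid, obj, ok in locks:
        if ok:
            granted.setdefault(obj, []).append(pid)
    blocked = {}
    for pid, obj, ok in locks:
        if not ok:
            blocked[pid] = [p for p in granted.get(obj, []) if p != pid]
    return blocked
```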
ALTER DATABASE your_database SET statement_timeout = '120s';
For all databases and users, set it in postgresql.conf:
statement_timeout = '120s'
Then reload the configuration (statement_timeout can be changed with a reload; no server restart is required):
SELECT pg_reload_conf();
Note: Setting a global timeout is not recommended because it affects all sessions, including maintenance jobs. Use session-level or database-level settings instead.
If encountering this error during AWS DMS or other migration operations, the timeout may be enforced by the tool itself.
For AWS DMS:
- Increase the statement_timeout on the target database
- In Task Settings → Advanced, turn on BatchApplyEnabled for faster writes
- Increase LOB chunk size if migrating large objects
For application frameworks:
- Check your connection string for timeout parameters
- For Node.js (pg library): statement_timeout: 120000
- For Python (psycopg2): options='-c statement_timeout=120000'
- For Java (pgJDBC): socketTimeout=120 — note this value is in seconds and is a client-side network cutoff, not the server's statement_timeout; to set the server timeout, pass options=-c statement_timeout=120000 as a connection property
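Most libpq-based clients (psycopg2 included) can forward server settings at connect time through the options parameter. A hypothetical helper for building that string:

```python
def libpq_options(**settings):
    # Render server parameters as a libpq startup 'options' string, e.g.
    # psycopg2.connect(dsn, options=libpq_options(statement_timeout="120s"))
    return " ".join(f"-c {name}={value}" for name, value in settings.items())
```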
Understanding PostgreSQL Timeout Parameters: PostgreSQL provides several timeout settings beyond statement_timeout. The idle_in_transaction_session_timeout setting terminates sessions that stay idle inside a transaction, which can otherwise hold locks on resources. The lock_timeout setting limits how long a query waits to acquire a lock on a table or row. The transaction_timeout setting (PostgreSQL 17+) limits the total duration of a transaction regardless of individual statement times.
When setting timeouts, balance preventing runaway queries against allowing legitimate long-running operations. For data warehouse workloads, reporting queries, and batch jobs, consider running them against read replicas or using separate connection pools with higher timeouts. Connection pools like PgBouncer can also enforce statement-level timeout policies independently of the database server. In containerized environments (Docker, Kubernetes), network latency can cause unexpected timeout behavior, so monitor actual query execution times against the configured limits.
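A connection pool or application can apply a consistent set of these timeouts when each session is opened. A sketch for any DB-API connection (the parameter values here are illustrative, not recommendations):

```python
SESSION_TIMEOUTS = {
    "statement_timeout": "120s",  # cancel any statement running over 2 minutes
    "lock_timeout": "10s",        # fail fast instead of queuing behind locks
    "idle_in_transaction_session_timeout": "5min",  # drop abandoned transactions
}

def apply_session_timeouts(conn, timeouts=SESSION_TIMEOUTS):
    # Issue one SET per parameter on a freshly opened connection.
    cur = conn.cursor()
    for name, value in timeouts.items():
        cur.execute(f"SET {name} = '{value}'")
```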