Stack depth limit exceeded errors occur when PostgreSQL query execution exceeds the maximum call stack depth. This typically happens with recursive triggers, large IN clauses, or deeply nested queries. The fix depends on the root cause and may involve restructuring queries, preventing trigger recursion, or adjusting configuration parameters.
PostgreSQL maintains an execution stack to track function calls and operations. When the stack becomes too deep—exceeding the max_stack_depth parameter (default 2MB)—the database throws a "stack depth limit exceeded" error. This is a safety mechanism to prevent runaway processes from consuming all system memory and crashing the backend. The error indicates that your query or trigger logic is creating too many nested operations or infinite recursion.
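To see the failure mode in isolation, here is a minimal sketch (the function name runaway is made up for illustration):
-- A deliberately unbounded recursive function (illustrative only):
CREATE FUNCTION runaway(n INT) RETURNS INT AS $$
BEGIN
RETURN runaway(n + 1); -- no base case, so it recurses until the stack limit
END;
$$ LANGUAGE plpgsql;
SELECT runaway(1);
-- ERROR:  stack depth limit exceeded
-- HINT:  Increase the configuration parameter "max_stack_depth" (currently 2048kB),
--        after ensuring the platform's stack depth limit is adequate.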
Check your PostgreSQL logs and the query that triggered the error. Look for patterns:
-- If it's a large IN clause
SELECT * FROM table WHERE id IN (val1, val2, ..., val5000);
-- If it's a trigger issue, check trigger definitions
SELECT tgname, pg_get_triggerdef(oid) FROM pg_trigger WHERE NOT tgisinternal;
-- If it's a function, identify recursive calls
Use EXPLAIN or EXPLAIN ANALYZE to understand the execution plan.
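For example (my_table is a placeholder name here):
-- EXPLAIN shows the plan without running the query; ANALYZE also executes it:
EXPLAIN ANALYZE SELECT * FROM my_table WHERE id IN (1, 2, 3);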
Replace large IN lists with subqueries or temporary tables, which keep the parser from building one enormous nested expression:
-- Instead of this (causes stack depth error with 5000+ items):
DELETE FROM my_table WHERE id IN (1, 2, 3, ..., 5000);
-- Use this approach:
CREATE TEMP TABLE doomed AS VALUES (1), (2), (3), ..., (5000);
DELETE FROM my_table WHERE id IN (SELECT column1 FROM doomed);
DROP TABLE doomed;
-- Or with a CTE:
WITH doomed AS (
VALUES (1), (2), (3), ..., (5000)
)
DELETE FROM my_table WHERE id IN (SELECT * FROM doomed);
This restructuring prevents the parser from creating deeply nested stack frames.
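If the values come from application code, another option is to bind them as a single array parameter: = ANY compares against one array value instead of thousands of parsed expressions. A sketch, assuming an integer id column:
-- One bound parameter instead of 5000 literals:
DELETE FROM my_table WHERE id = ANY($1::int[]);
-- Or with an inline array literal:
DELETE FROM my_table WHERE id = ANY(ARRAY[1, 2, 3]);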
If an UPDATE trigger modifies the same table (causing infinite recursion), restructure it:
-- Problem: AFTER trigger causing recursion
CREATE FUNCTION update_other_column() RETURNS trigger AS $$
BEGIN
UPDATE my_table SET other_col = NEW.some_col WHERE id = NEW.id; -- re-fires the trigger
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_trigger AFTER UPDATE ON my_table
FOR EACH ROW EXECUTE FUNCTION update_other_column();
-- Solution 1: Use BEFORE trigger to modify NEW directly
CREATE FUNCTION set_other_column() RETURNS trigger AS $$
BEGIN
NEW.other_col := NEW.some_col; -- no second UPDATE, so no recursion
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_trigger BEFORE UPDATE ON my_table
FOR EACH ROW EXECUTE FUNCTION set_other_column();
-- Solution 2: Guard recursion with pg_trigger_depth()
CREATE FUNCTION update_other_column_guarded() RETURNS trigger AS $$
BEGIN
IF pg_trigger_depth() = 1 THEN -- only run at the first trigger level
UPDATE my_table SET other_col = NEW.some_col WHERE id = NEW.id;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_trigger AFTER UPDATE ON my_table
FOR EACH ROW EXECUTE FUNCTION update_other_column_guarded();
The BEFORE trigger approach is preferred because it avoids the second UPDATE entirely.
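A variant of Solution 2, sketched here, moves the guard into the trigger definition itself with a WHEN clause: pg_trigger_depth() evaluates to 0 for the top-level statement and to 1 or more for statements issued from inside a trigger, so the nested UPDATE never fires the trigger at all:
-- Variant: block re-entry at the trigger level rather than inside the function
CREATE TRIGGER my_trigger AFTER UPDATE ON my_table
FOR EACH ROW WHEN (pg_trigger_depth() = 0)
EXECUTE FUNCTION update_other_column();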
If you have a recursive PL/pgSQL function, replace it with a WITH RECURSIVE CTE:
-- Instead of a recursive function (each call adds a stack frame)
CREATE FUNCTION traverse_tree(node_id INT) RETURNS TABLE(...) AS $$
BEGIN
RETURN QUERY SELECT * FROM nodes WHERE id = node_id;
RETURN QUERY SELECT t.* FROM nodes n, LATERAL traverse_tree(n.parent_id) t
WHERE n.id = node_id;
END;
$$ LANGUAGE plpgsql;
-- Use recursive CTE (more efficient)
WITH RECURSIVE tree_traversal AS (
SELECT id, parent_id, name FROM nodes WHERE id = $1
UNION ALL
SELECT n.id, n.parent_id, n.name
FROM nodes n
INNER JOIN tree_traversal t ON n.id = t.parent_id
)
SELECT * FROM tree_traversal;
Recursive CTEs use iteration instead of function calls, consuming less stack memory.
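One caveat: if the data can contain cycles, the recursive CTE itself will loop until it exhausts memory. A minimal sketch of a depth guard (the depth column and the limit of 100 are illustrative choices, not part of the original query):
WITH RECURSIVE tree_traversal AS (
SELECT id, parent_id, name, 1 AS depth FROM nodes WHERE id = $1
UNION ALL
SELECT n.id, n.parent_id, n.name, t.depth + 1
FROM nodes n
INNER JOIN tree_traversal t ON n.id = t.parent_id
WHERE t.depth < 100 -- stop runaway recursion on cyclic data
)
SELECT * FROM tree_traversal;
PostgreSQL 14 and later also provide a built-in CYCLE clause for recursive CTEs, which detects cycles without a manual depth column.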
Only increase max_stack_depth if you've fixed the root cause but need additional headroom. Check current setting:
SHOW max_stack_depth; -- Default: 2MB
Increase it cautiously:
SET max_stack_depth = '4MB'; -- Session only
-- Or permanently in postgresql.conf:
-- max_stack_depth = 4MB
Check (and if necessary raise) your system's stack ulimit so that it stays higher than max_stack_depth:
ulimit -s # Check current stack limit
ulimit -s unlimited # Increase if needed
WARNING: Never set max_stack_depth at or above the kernel stack limit; leave a safety margin of at least a megabyte. Otherwise a runaway recursive function could overflow the real stack and crash the backend process.
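The same checks and changes can be made from SQL if you have superuser rights (the 4MB value is just an example):
-- Inspect the active value and its allowed range:
SELECT name, setting, unit, min_val, max_val FROM pg_settings WHERE name = 'max_stack_depth';
-- Persist a new value without hand-editing postgresql.conf, then reload:
ALTER SYSTEM SET max_stack_depth = '4MB';
SELECT pg_reload_conf();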
Different databases handle large IN lists differently: Oracle caps an IN list at 1,000 expressions (ORA-01795), SQL Server has no fixed documented cap but can fail on very large lists, and PostgreSQL has no hard limit either, though lists of a few thousand items can hit the stack depth limit depending on the system. The max_stack_depth parameter is platform-dependent and cannot be set arbitrarily; PostgreSQL validates it against kernel limits. On managed platforms like Heroku, max_stack_depth is locked and cannot be modified. When restructuring recursive operations, consider window functions (ROW_NUMBER(), RANK()), which often sidestep recursion entirely. For complex dependency graphs, a materialized view can cache the traversal result and avoid recomputing it on every query.
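As a sketch of that last idea, assuming the same nodes(id, parent_id) table used above:
-- Cache the full ancestor traversal once; queries then read the cached rows.
CREATE MATERIALIZED VIEW node_ancestry AS
WITH RECURSIVE ancestry AS (
SELECT id, parent_id AS ancestor_id, 1 AS depth
FROM nodes
WHERE parent_id IS NOT NULL
UNION ALL
SELECT a.id, n.parent_id, a.depth + 1
FROM ancestry a
INNER JOIN nodes n ON n.id = a.ancestor_id
WHERE n.parent_id IS NOT NULL
)
SELECT * FROM ancestry;
REFRESH MATERIALIZED VIEW node_ancestry; -- rerun after the underlying data changes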