PostgreSQL raises error 54000 when a query or schema surpasses built-in limits such as maximum columns per table, index key size, or expression nesting depth. Resolve by restructuring your schema, simplifying queries, or breaking down large operations.
PostgreSQL has hard-coded limits for various schema and query structures to maintain system stability. Error 54000 (program_limit_exceeded) is raised when an operation attempts to exceed one of these compiled limits. This could be creating a table with too many columns, building an index with keys that are too large, nesting functions too deeply, or executing queries with extremely long argument lists or GROUPING SETS.
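For instance, the column limit is easy to reproduce in a scratch database. This is a sketch; the table name too_wide and the generated column names are placeholders:

```sql
-- Build a CREATE TABLE statement with 1700 columns and execute it.
-- PostgreSQL rejects it because tables can have at most 1600 columns.
DO $$
DECLARE
  cols text;
BEGIN
  SELECT string_agg('c' || i || ' int', ', ') INTO cols
  FROM generate_series(1, 1700) AS i;
  EXECUTE format('CREATE TABLE too_wide (%s)', cols);
END $$;
-- ERROR:  tables can have at most 1600 columns
```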
Read the error message carefully. PostgreSQL will tell you which limit was exceeded. Common ones:
ERROR: number of columns (1664) exceeds limit (1600)
ERROR: index row size 3500 exceeds maximum 2712
ERROR: index row requires 10216 bytes, maximum size is 8191
ERROR: too many arguments for function
ERROR: stack depth limit exceeded

Move rarely used or sparse columns to a separate table and join on primary key:
-- Before: one table with 1700+ columns
CREATE TABLE users (
id SERIAL PRIMARY KEY,
col1 text, col2 text, ... col1700 text
);
-- After: split into two tables
CREATE TABLE users (
id SERIAL PRIMARY KEY,
col1 text, col2 text, ... col800 text
);
CREATE TABLE user_extended_attrs (
user_id INT PRIMARY KEY REFERENCES users(id),
col801 text, col802 text, ... col1700 text
);

Alternatively, use JSONB for sparse attributes:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name text,
email text,
metadata JSONB -- store optional/sparse columns here
);

If your index key is too large:
-- Instead of indexing the full value directly:
CREATE INDEX idx_large_text ON my_table (large_text_column);
-- Create an index on a hash of the value instead
-- (queries must then filter on md5(large_text_column) to use this index):
CREATE INDEX idx_large_text_hash ON my_table USING btree (
md5(large_text_column)
);
-- Or use full-text search for text columns:
CREATE INDEX idx_full_text ON my_table USING GIN (to_tsvector('english', large_text_column));

Instead of a huge IN clause that can trigger "stack depth limit exceeded":
-- Bad: thousands of values
DELETE FROM my_table WHERE id IN (1, 2, 3, ..., 10000);
-- Good: use a temporary table and join
CREATE TEMP TABLE ids_to_delete AS SELECT * FROM (VALUES (1), (2), (3), ...) AS t(id);
DELETE FROM my_table WHERE id IN (SELECT id FROM ids_to_delete);
DROP TABLE ids_to_delete;

Break complex recursive queries or deeply nested subqueries into smaller parts:
-- Instead of 100+ nested subqueries, use CTEs:
WITH layer1 AS (
SELECT * FROM table1 WHERE condition1
),
layer2 AS (
SELECT * FROM layer1 WHERE condition2
),
layer3 AS (
SELECT * FROM layer2 WHERE condition3
)
SELECT * FROM layer3;

The default max_stack_depth is 2MB. As a superuser, you can increase it:
SET max_stack_depth = '4MB';

WARNING: Do not set max_stack_depth higher than the operating system's stack limit (check with ulimit -s on Linux); a runaway recursive function could otherwise crash the PostgreSQL backend. This is especially important on hosted platforms like Heroku, where you cannot adjust the system ulimit.
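If the larger value should survive restarts, it can also be set cluster-wide. This is a sketch; it assumes superuser access and that the OS stack limit permits the new value:

```sql
-- Persist the setting across restarts (superuser only):
ALTER SYSTEM SET max_stack_depth = '4MB';
SELECT pg_reload_conf();   -- apply without restarting the server
SHOW max_stack_depth;      -- verify the new value in a fresh session
```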
PostgreSQL's program limits are compiled into the binary and cannot be changed at runtime without recompiling. The 1600-column limit stems from PostgreSQL's fixed on-disk tuple layout (attribute numbers are 16-bit, and the tuple header's null bitmap must fit within a fixed-size offset field). The 8191-byte limit per index row comes from PostgreSQL's 8KB page size. For very large IN clauses (more than a few thousand values), consider a VALUES list materialized as a temporary table, a CTE, or UNION ALL for readability and to avoid stack depth issues. On managed platforms like Heroku, increasing max_stack_depth may not be possible because it is tied to system configuration (ulimit).
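As a lighter-weight alternative to a temporary table, many client drivers let you bind the whole value list as a single array parameter. A sketch, where my_table and the literal array stand in for your table and the driver-bound parameter:

```sql
-- One array value instead of thousands of individual literals:
DELETE FROM my_table WHERE id = ANY (ARRAY[1, 2, 3]);
-- With a parameterized query, bind the entire array as one parameter:
-- DELETE FROM my_table WHERE id = ANY ($1);
```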