PostgreSQL enforces a hard limit of 65,535 key-value pairs in JSON/JSONB objects. Exceeding this limit raises error 2203E. The fix involves restructuring your data to use nested objects or arrays instead of a single flat object.
PostgreSQL error 2203E (too_many_json_object_members) occurs when a JSON or JSONB object contains more than 65,535 key-value pairs. This is a hard-coded safety limit in the PostgreSQL source code to prevent memory exhaustion and ensure predictable performance. The parser enforces this ceiling equally for both the json and jsonb data types. When you attempt to create or update a row that results in a JSON object exceeding this limit—whether through INSERT, UPDATE, COPY, or function calls—the database transaction rolls back with error 2203E. This error is distinct from other JSON limits: it focuses on the number of members (key-value pairs) in a single object, not the total document size or individual string lengths.
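Because the limit applies to the number of members in a single object rather than to document size, a payload can be screened client-side before it ever reaches the database. A minimal Python sketch (the recursive walk and the helper name are illustrative, not a PostgreSQL API):

```python
import json

PG_JSON_MEMBER_LIMIT = 65_535  # PostgreSQL's per-object member ceiling

def exceeds_member_limit(doc: str) -> bool:
    """Return True if any single object in the JSON document has more
    than 65,535 members (the condition that raises error 2203E)."""
    def check(node) -> bool:
        if isinstance(node, dict):
            if len(node) > PG_JSON_MEMBER_LIMIT:
                return True
            return any(check(v) for v in node.values())
        if isinstance(node, list):
            return any(check(v) for v in node)
        return False
    return check(json.loads(doc))

small = json.dumps({f"k{i}": i for i in range(10)})
print(exceeds_member_limit(small))  # False
```

Note that the check is per object: a document with millions of members spread across nested objects is fine as long as no single object crosses the ceiling.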
## Diagnose the Problem

PostgreSQL has no built-in jsonb_object_length() function, so count the members by expanding the keys with jsonb_object_keys():

```sql
SELECT count(*) AS member_count
FROM your_table,
     LATERAL jsonb_object_keys(json_column)
WHERE id = your_id;
```

If the result is greater than 65,535, you've hit the limit. If it's much smaller, the error is likely being raised during construction (e.g., inside a function or loop that builds the object incrementally).
## Restructure with Nested Objects

Instead of storing all keys at the top level, organize them into nested objects. For example, if you're building a settings object with thousands of keys:

Before (fails):

```sql
SELECT jsonb_build_object(
    'setting_1', value_1,
    'setting_2', value_2,
    -- ... 65,535+ more keys ...
    'setting_99999', value_99999
);
```

After (works):

```sql
SELECT jsonb_build_object(
    'group_a', jsonb_build_object(
        'setting_1', value_1,
        'setting_2', value_2
    ),
    'group_b', jsonb_build_object(
        'setting_3', value_3
        -- ... distribute keys across groups ...
    )
);
```

Each nested object is still subject to the 65,535-member limit, but the top-level object now has far fewer keys.
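The same grouping can be done in application code before the document is serialized. A sketch, assuming an arbitrary group size of 60,000 to leave headroom under the limit (the group naming scheme is mine):

```python
def nest_flat_object(flat: dict, group_size: int = 60_000) -> dict:
    """Split a flat dict into numbered sub-objects, each safely under
    PostgreSQL's 65,535-member ceiling."""
    items = list(flat.items())
    return {
        f"group_{i // group_size}": dict(items[i:i + group_size])
        for i in range(0, len(items), group_size)
    }

flat = {f"setting_{i}": i for i in range(150_000)}
nested = nest_flat_object(flat)
print(len(nested))             # 3 top-level groups
print(len(nested["group_0"]))  # 60000 members each
```

A fixed-size split loses semantic grouping; if the keys have natural categories (user settings, feature flags, and so on), grouping by category usually makes later lookups easier.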
## Store an Array of Objects Instead

If the data is naturally ordered or doesn't require key lookups, store it as a JSON array of objects instead. Note that an aggregate such as jsonb_agg() cannot appear in a VALUES list; it needs a SELECT over the source rows:

```sql
-- Before: flat object with many keys
INSERT INTO table_data (id, config)
VALUES (1, jsonb_build_object(...));

-- After: array of key/value objects, aggregated from the source rows
INSERT INTO table_data (id, config)
SELECT 1, jsonb_agg(jsonb_build_object('key', key, 'value', value))
FROM source_rows;
```

Arrays are also capped at 65,535 elements, but this approach is often more flexible for bulk data.
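The flat-object-to-array conversion can equally be done client-side before inserting. A minimal sketch (the {"key": ..., "value": ...} shape mirrors the SQL example above):

```python
import json

def to_kv_array(flat: dict) -> str:
    """Serialize a flat dict as a JSON array of {"key": ..., "value": ...}
    objects, suitable for a jsonb column holding ordered bulk data."""
    return json.dumps([{"key": k, "value": v} for k, v in flat.items()])

payload = to_kv_array({"a": 1, "b": 2})
print(payload)  # [{"key": "a", "value": 1}, {"key": "b", "value": 2}]
```

Each element is a two-member object, so only the outer array length matters against the 65,535-element ceiling.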
## Guard the Member Count Before Writing

Before inserting or updating, count the keys and stop adding new ones once you approach the limit. Since jsonb_object_length() does not exist, count via jsonb_object_keys():

```sql
-- Example: refuse the update once the object holds 60,000 members
UPDATE your_table
SET json_column = jsonb_set(json_column, '{new_key}', to_jsonb(new_value))
WHERE id = ?
  AND (SELECT count(*) FROM jsonb_object_keys(json_column)) < 60000;
```

This prevents the transaction from ever reaching the hard limit.
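The same guard can live in application code, which avoids a round trip for documents assembled client-side. A sketch, assuming a soft limit of 60,000 (an arbitrary headroom choice, mirroring the SQL predicate above):

```python
SOFT_LIMIT = 60_000  # headroom below PostgreSQL's 65,535 hard limit

def try_add_key(obj: dict, key: str, value) -> bool:
    """Add a key only while the object stays under the soft limit.
    Overwriting an existing key never grows the object, so it is
    always allowed."""
    if len(obj) >= SOFT_LIMIT and key not in obj:
        return False  # caller should restructure instead of growing further
    obj[key] = value
    return True

settings = {}
print(try_add_key(settings, "theme", "dark"))  # True
```

Returning False rather than raising keeps the decision with the caller, which can then switch to one of the restructuring strategies above.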
## Extract Fields into Relational Columns

If the wide JSON object represents structured data, consider storing stable attributes as relational columns instead of JSON. This improves query performance and eliminates the member limit:

```sql
-- Before: everything in JSON
CREATE TABLE records (
    id INT PRIMARY KEY,
    config JSONB
);

-- After: extract frequently accessed fields
CREATE TABLE records (
    id INT PRIMARY KEY,
    user_name TEXT,
    email TEXT,
    created_at TIMESTAMP,
    metadata JSONB  -- only dynamic/rare fields
);
```

This also enables conventional indexing and faster queries.
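When writing rows against such a schema, the application has to split each document into column values and a metadata remainder. An illustrative sketch (the column set is hypothetical and would match your actual table definition):

```python
STABLE_COLUMNS = {"user_name", "email", "created_at"}  # illustrative schema

def split_record(doc: dict) -> tuple[dict, dict]:
    """Separate stable, frequently queried fields (destined for real
    columns) from the dynamic remainder kept in a jsonb metadata column."""
    columns = {k: v for k, v in doc.items() if k in STABLE_COLUMNS}
    metadata = {k: v for k, v in doc.items() if k not in STABLE_COLUMNS}
    return columns, metadata

cols, meta = split_record({"user_name": "ada", "email": "a@x.io", "flag_1": True})
print(cols)  # {'user_name': 'ada', 'email': 'a@x.io'}
```

Only the metadata dict remains subject to the member limit, and it shrinks with every field promoted to a real column.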
## Function Argument Limit
If you're using json_build_object() or jsonb_build_object(), note that PostgreSQL caps every function call at 100 arguments (the FUNC_MAX_ARGS compile-time constant). Since each key-value pair consumes two arguments, a single call can build at most 50 pairs. To work around this, aggregate rows of pairs with json_object_agg() instead:
```sql
WITH kv_pairs AS (
    SELECT 'key_1'::TEXT AS k, value_1::TEXT AS v
    UNION ALL
    SELECT 'key_2', value_2
    -- ... as many pairs as needed ...
)
SELECT json_object_agg(k, v) FROM kv_pairs;
```
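If you must generate jsonb_build_object() calls dynamically, another option is to chunk the pairs at 50 per call and merge the resulting objects server-side with the || operator. A Python sketch that emits parameterized call text (the %s placeholder style matches drivers such as psycopg; the helper itself is illustrative):

```python
def build_object_calls(keys: list[str], chunk: int = 50) -> list[str]:
    """Generate jsonb_build_object(...) call texts, each staying under the
    100-argument (50-pair) ceiling; keys and values become %s placeholders
    to be bound by the driver."""
    calls = []
    for i in range(0, len(keys), chunk):
        pairs = ", ".join("%s, %s" for _ in keys[i:i + chunk])
        calls.append(f"jsonb_build_object({pairs})")
    return calls

calls = build_object_calls([f"k{i}" for i in range(120)])
print(len(calls))  # 3 calls: 50 + 50 + 20 pairs
```

Joining the generated calls with " || " yields a single expression that builds the merged object, since || on two jsonb objects concatenates their members.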
## Related Limits

PostgreSQL also enforces a 65,535-element limit on JSON arrays (error 2203F: too_many_json_array_elements). The same restructuring strategies apply.
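For arrays, the simplest client-side mitigation is to split long lists into sub-arrays before serialization. A sketch:

```python
def chunk_array(items: list, limit: int = 65_535) -> list[list]:
    """Split a long list into sub-arrays that each respect the
    65,535-element ceiling (error 2203F)."""
    return [items[i:i + limit] for i in range(0, len(items), limit)]

print(len(chunk_array(list(range(70_000)))))  # 2 sub-arrays
```

The outer array then holds one element per chunk, which keeps every level of the document under the limit for any realistic size.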
## TOAST and Performance
Large JSONB documents (>2 KB) may trigger PostgreSQL's TOAST mechanism, which compresses and slices the value across multiple heap pages. This can degrade performance. Restructuring not only respects the member limit but also improves query speed and storage efficiency.