PostgreSQL enforces a hard limit of 268,435,455 bytes (about 256 MB) on the combined size of the elements in a JSONB array. Split large arrays into multiple smaller arrays or normalize the data into relational rows.
This error occurs when you attempt to create or store a JSONB array whose elements together exceed 268,435,455 bytes (approximately 256 MB). PostgreSQL imposes this hard-coded limit because the JSONB binary format stores element offsets and lengths in 28-bit fields, which also keeps memory usage bounded and predictable. The error is triggered not by the number of elements but by the total byte size of the array contents. The limit is enforced inside the JSONB data structure itself and cannot be bypassed without recompiling PostgreSQL.
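To see the limit in action on a scratch database (this is only a reproduction sketch; it briefly allocates a few hundred megabytes of memory), aggregate roughly 300 MB of text values into a single JSONB array:
SELECT jsonb_agg(repeat('x', 1000)) AS too_big
FROM generate_series(1, 300000);
The final conversion to JSONB fails because the combined size of the array elements exceeds 268,435,455 bytes.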
Before attempting a fix, identify how large your JSON array actually is. Run a test query to see the byte size:
SELECT octet_length(your_json_column::text) AS byte_size FROM your_table;
If the result exceeds 268,435,455 bytes, you have confirmed the issue.
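To narrow down which rows are closest to the limit, sort by that size (id, your_json_column, and your_table are placeholders for your own schema):
SELECT id, octet_length(your_json_column::text) AS byte_size
FROM your_table
ORDER BY byte_size DESC
LIMIT 10;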
The simplest fix is to break a single large array into multiple arrays. For example, if you are aggregating 100,000 rows into one array, split them into batches:
SELECT batch,
       jsonb_agg(to_jsonb(s) - 'batch') AS batch_data
FROM (
    SELECT t.*,
           (row_number() OVER () - 1) / 10000 AS batch
    FROM your_table t
) s
GROUP BY batch;
This creates multiple smaller arrays, each well under the limit. Note that a window function cannot appear directly in GROUP BY, so the batch number is computed in a subquery, and "- 'batch'" removes the helper column from each element.
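If you want a rough starting point for the batch size rather than the fixed 10,000 used above, you can derive one from the average serialized row size; this heuristic targets about half the limit to leave a safety margin and assumes rows are reasonably uniform:
SELECT floor((268435455 / 2) / NULLIF(avg(octet_length(to_jsonb(t)::text)), 0)) AS suggested_batch_size
FROM your_table t;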
The recommended solution is to normalize your data by storing each JSON element as a separate row. Instead of aggregating into a single JSONB array:
-- Instead of this (fails once the aggregated data grows too large):
SELECT jsonb_agg(to_jsonb(t)) FROM source_table t;
-- Do this:
INSERT INTO target_table (id, data)
SELECT id, to_jsonb(t) FROM source_table t;
This approach improves query performance, enables indexing on individual elements, and sidesteps the array size limit entirely.
If you need to return JSON arrays in API responses, paginate the results instead of aggregating all rows at once:
SELECT jsonb_agg(to_jsonb(s)) AS items
FROM (
    SELECT *
    FROM your_table
    ORDER BY id
    OFFSET ($1 - 1) * 1000  -- $1 is the 1-based page number passed as a query parameter
    LIMIT 1000
) s;
This keeps each response under the limit while allowing clients to iterate through all the data. The LIMIT must go in the subquery: applied after the aggregate, it would only limit the single aggregated result row, not the number of rows being aggregated.
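OFFSET-based pagination rescans all skipped rows on every page, so for large tables a keyset variant is usually cheaper; here $1 is the highest id returned by the previous page (0 for the first page), a value your application tracks between requests:
SELECT jsonb_agg(to_jsonb(s)) AS items
FROM (
    SELECT *
    FROM your_table
    WHERE id > $1
    ORDER BY id
    LIMIT 1000
) s;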
If you build JSON in application logic or PL/pgSQL, add explicit size checks to prevent exceeding the limit:
DO $$
DECLARE
    rec      record;
    result   jsonb := '[]'::jsonb;
    max_size constant int := 268435455;
BEGIN
    FOR rec IN SELECT * FROM large_table LOOP
        -- Appending a non-array jsonb value to a jsonb array adds it as a new element.
        result := result || to_jsonb(rec);
        -- The text length only approximates the internal size, so leave some headroom in practice.
        IF octet_length(result::text) > max_size THEN
            RAISE EXCEPTION 'JSON array size limit exceeded';
        END IF;
    END LOOP;
END $$;
This prevents silent failures and lets you handle the error gracefully.
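If aborting the whole operation is too drastic, another option is to flush the accumulated array and start a new one whenever it approaches the limit. A minimal sketch, assuming a staging table result_batches(batch_data jsonb) exists:
DO $$
DECLARE
    rec      record;
    result   jsonb := '[]'::jsonb;
    max_size constant int := 200000000;  -- stay well below 268,435,455 for headroom
BEGIN
    FOR rec IN SELECT * FROM large_table LOOP
        result := result || to_jsonb(rec);
        IF octet_length(result::text) > max_size THEN
            INSERT INTO result_batches (batch_data) VALUES (result);
            result := '[]'::jsonb;  -- start the next batch
        END IF;
    END LOOP;
    IF jsonb_array_length(result) > 0 THEN
        INSERT INTO result_batches (batch_data) VALUES (result);  -- flush the remainder
    END IF;
END $$;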
The limit of 268,435,455 bytes (roughly 256 MB) is hard-coded in PostgreSQL's JSONB implementation: element offsets and lengths are stored in 28-bit fields, masked by the JENTRY_OFFLENMASK constant in src/include/utils/jsonb.h. The PostgreSQL developers treat this as expected behavior rather than a bug and have no plans to raise the limit without significant architectural changes. There is also a separate 1 GB limit on any individual field value (not just JSON), and raising either limit would require patching and recompiling PostgreSQL from source. The number of elements in a JSONB array and of key/value pairs in a JSONB object is capped as well; the analogous condition for objects has its own error code, too_many_json_object_members (SQLSTATE 2203E). For comparison, MongoDB caps documents at 16 MB, making PostgreSQL considerably more generous.

The best practice is to normalize your data model: instead of storing large arrays, create foreign-key relationships and query with joins. This provides better performance, enables indexing on individual elements, and keeps each row comfortably clear of any size constraint.
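As a sketch of that normalized layout (all table and column names here are illustrative), the array becomes child rows linked by a foreign key, and an array can still be assembled on demand for a single parent:
CREATE TABLE orders (
    id bigint PRIMARY KEY
);

CREATE TABLE order_items (
    id       bigserial PRIMARY KEY,
    order_id bigint NOT NULL REFERENCES orders (id),
    item     jsonb  NOT NULL
);

CREATE INDEX order_items_order_id_idx ON order_items (order_id);

-- Rebuild the array for one parent at a time, so the result stays small:
SELECT o.id, jsonb_agg(oi.item ORDER BY oi.id) AS items
FROM orders o
JOIN order_items oi ON oi.order_id = o.id
WHERE o.id = 42
GROUP BY o.id;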