Your PostgreSQL sequence has reached its maximum value and can no longer generate new IDs. This typically happens with 32-bit integer sequences, which top out at about 2.1 billion values. Fix it by altering the sequence and its column to the 64-bit bigint type.
PostgreSQL sequences generate incrementing numbers for primary keys and other auto-increment columns. Every sequence has a maximum value determined by its data type: 32-bit integers max out at 2,147,483,647, while 64-bit big integers max out at 9,223,372,036,854,775,807. When nextval() is called on a sequence that has reached its maximum value and CYCLE is not enabled, PostgreSQL throws this error. It means the sequence has handed out every available value; note that rolled-back or failed inserts consume values too, so the table itself may hold fewer rows than the sequence's position suggests. The root cause is usually that a table was created using the SERIAL pseudo-type (which defaults to 32-bit integer) rather than BIGSERIAL (which uses 64-bit bigint). As tables grow over time, they eventually consume all available sequence values, blocking all insert operations until the sequence is fixed.
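To see the failure mode in isolation, a throwaway sequence with a tiny MAXVALUE reproduces it (the sequence name here is purely illustrative):
-- Disposable sequence that exhausts after three values
CREATE SEQUENCE demo_seq MAXVALUE 3;
SELECT nextval('demo_seq');  -- 1
SELECT nextval('demo_seq');  -- 2
SELECT nextval('demo_seq');  -- 3
SELECT nextval('demo_seq');  -- ERROR: nextval: reached maximum value of sequence "demo_seq" (3)
DROP SEQUENCE demo_seq;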
Check your application logs or PostgreSQL error logs to find the sequence name mentioned in the error.
-- Example error mentions specific sequence
-- ERROR: nextval: reached maximum value of sequence "users_id_seq"
Note the exact sequence name from the error message.
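If the log line is not at hand, pg_get_serial_sequence resolves the sequence that backs a column (table and column names here are illustrative):
-- Returns e.g. public.users_id_seq, or NULL if the column has no owned sequence
SELECT pg_get_serial_sequence('users', 'id');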
Connect to your PostgreSQL database and query the sequence information:
SELECT sequencename, data_type, start_value, max_value, last_value
FROM pg_sequences
WHERE sequencename = 'users_id_seq';
If data_type is integer, the sequence maxes out at 2,147,483,647. If it is bigint, the ceiling is 9,223,372,036,854,775,807.
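The pg_sequences view exists only on PostgreSQL 10 and later. On a 9.x server, select from the sequence relation directly, which exposes the same parameters as columns:
-- PostgreSQL 9.x only: sequence parameters are columns of the sequence itself
SELECT last_value, max_value FROM users_id_seq;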
If you can safely reset your IDs (no foreign key references or data dependencies), restart the sequence:
ALTER SEQUENCE users_id_seq RESTART WITH 1;
Warning: Only use this if:
- The table has few rows or old rows can be archived
- No other tables reference these IDs via foreign keys (see the check query after this list)
- You understand the business impact of reusing IDs
For most production systems, this is not safe. Use the long-term fix below.
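To verify the foreign-key condition above before a restart, you can ask the system catalogs which constraints point at the table (a sketch; substitute your table name):
-- Every foreign key that references the users table
SELECT conname AS constraint_name,
       conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'users'::regclass;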
Alter the sequence to use 64-bit bigint, which has virtually unlimited values:
ALTER SEQUENCE users_id_seq AS bigint;
Then alter the table column to match:
ALTER TABLE users ALTER COLUMN id TYPE bigint USING id::bigint;
Note: If the column is a primary key, this may require dropping and recreating the constraint:
ALTER TABLE users DROP CONSTRAINT users_pkey;
ALTER TABLE users ALTER COLUMN id TYPE bigint;
ALTER TABLE users ADD PRIMARY KEY (id);
If other tables have foreign keys pointing to this table, update those columns too:
-- Example: orders.user_id references users.id
ALTER TABLE orders ALTER COLUMN user_id TYPE bigint;
Repeat for all columns that reference the affected ID column; the catalog query below lists them.
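Rather than hunting for referencing columns by hand, a catalog query can enumerate them (a sketch; adjust the referenced table name):
-- Each foreign-key column that points at users, with its owning table
SELECT c.conrelid::regclass AS referencing_table,
       a.attname AS referencing_column
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid AND a.attnum = ANY (c.conkey)
WHERE c.contype = 'f'
  AND c.confrelid = 'users'::regclass;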
For new tables, never use 32-bit SERIAL. Instead:
Option A: BIGSERIAL (simpler, though the SERIAL pseudo-types are now considered legacy)
CREATE TABLE users (
id BIGSERIAL PRIMARY KEY,
name TEXT NOT NULL
);
Option B: IDENTITY (SQL standard, recommended)
CREATE TABLE users (
id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
name TEXT NOT NULL
);
Identity columns are the modern PostgreSQL approach and offer better control and portability.
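One concrete benefit of GENERATED BY DEFAULT: explicit IDs are still accepted when you need them, for example during a data migration, while omitted IDs come from the identity sequence:
-- Explicit value is allowed with BY DEFAULT (GENERATED ALWAYS would reject it
-- without OVERRIDING SYSTEM VALUE)
INSERT INTO users (id, name) VALUES (42, 'migrated row');
-- Omitting id draws the next value from the identity sequence
INSERT INTO users (name) VALUES ('new row');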
Changing the column type takes an ACCESS EXCLUSIVE lock and rewrites the table, which can take hours on very large tables. If that downtime is unacceptable, consider:
1. Online rebuild with pg_repack (extension): pg_repack rebuilds a table while other transactions continue, taking only brief locks. Note that pg_repack does not change column types itself; it is useful for reorganizing and compacting the table around the migration.
sudo apt-get install postgresql-<version>-repack
pg_repack -U postgres -d mydb -t users
2. Zero-downtime migration: Create a new table with a bigint ID, backfill data, then swap tables with a brief lock, as sketched below.
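A minimal sketch of the swap approach, assuming the two-column users table from above; names are illustrative, and the triggers needed to capture writes arriving during the backfill are omitted for brevity:
-- 1. New table with a bigint identity column
CREATE TABLE users_new (
  id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  name TEXT NOT NULL
);
-- 2. Backfill (a single statement shown here; batch it on large tables)
INSERT INTO users_new (id, name) SELECT id, name FROM users;
-- 3. Advance the identity sequence past the copied IDs
SELECT setval(pg_get_serial_sequence('users_new', 'id'),
              (SELECT max(id) FROM users_new));
-- 4. Swap with a brief exclusive lock
BEGIN;
ALTER TABLE users RENAME TO users_old;
ALTER TABLE users_new RENAME TO users;
COMMIT;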
3. CYCLE option: As a temporary emergency measure, enable CYCLE on the sequence so nextval() wraps around to the minimum value instead of erroring. The wrapped values will collide with existing IDs and fail primary-key checks unless the old rows are gone:
ALTER SEQUENCE users_id_seq CYCLE;
This is not recommended for production because it can break referential integrity.
4. Monitoring: Set up alerts to warn when sequences approach maximum:
SELECT sequencename,
       ROUND((last_value::numeric / max_value) * 100, 2) AS percent_used
FROM pg_sequences
WHERE last_value IS NOT NULL
  AND last_value > (max_value * 0.8)
ORDER BY percent_used DESC;
How to fix "vacuum failsafe triggered" in PostgreSQL
PANIC: could not write to file
How to fix PANIC: could not write to file in PostgreSQL
insufficient columns in unique constraint for partition key
How to fix "insufficient columns in unique constraint for partition key" in PostgreSQL
ERROR 42501: must be owner of table
How to fix "must be owner of table" in PostgreSQL
trigger cannot change partition destination
How to fix "Trigger cannot change partition destination" in PostgreSQL