This PostgreSQL error (SQLSTATE 23505) occurs when an INSERT or UPDATE operation tries to write a value that already exists in a column covered by a unique constraint or primary key. Common causes include out-of-sync sequences, duplicate data, and concurrent inserts. Fixing it means identifying the constraint, checking for existing duplicates, resynchronizing the sequence if needed, or using INSERT ... ON CONFLICT.
PostgreSQL Error 23505 (Unique Constraint Violation) occurs when you attempt to insert or update a record with a value that violates a unique constraint. The constraint could be a primary key, a unique index, or an explicitly defined unique constraint on one or more columns. The error message includes the constraint name, for example ERROR: duplicate key value violates unique constraint "users_email_key", or a constraint named like "table_pkey" for primary key violations. Unlike application-level validation, PostgreSQL enforces these constraints at the database level, ensuring data integrity even if buggy code or concurrent operations attempt to bypass safeguards. The error is PostgreSQL's way of protecting your data: it refuses to compromise consistency by allowing duplicates. Understanding when and why this error occurs is essential for building reliable database applications, especially in multi-user environments where concurrent inserts are common.
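A minimal way to reproduce the error, assuming a hypothetical users table with a unique email column (the names and values here are illustrative, not your actual schema):
-- Hypothetical table with a unique constraint on email:
CREATE TABLE users (
    id    serial PRIMARY KEY,
    email text UNIQUE,
    name  text
);
INSERT INTO users (email, name) VALUES ('ann@example.com', 'Ann');
-- The second insert with the same email fails with SQLSTATE 23505:
INSERT INTO users (email, name) VALUES ('ann@example.com', 'Bob');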
The error message includes the constraint name, which tells you which column(s) are causing the problem.
-- Error example:
-- ERROR: duplicate key value violates unique constraint "users_email_key"
-- DETAIL: Key (email)=([email protected]) already exists.
-- Query the constraint details:
SELECT constraint_name, table_name, column_name
FROM information_schema.key_column_usage
WHERE table_name = 'users' AND constraint_name = 'users_email_key';
-- For primary key violations:
SELECT constraint_name, table_name
FROM information_schema.table_constraints
WHERE table_name = 'your_table' AND constraint_type = 'PRIMARY KEY';
Note the column(s) involved; this guides your next investigation.
Find which values already exist to understand the conflict.
-- Find all duplicates in a column:
SELECT email, COUNT(*) as count
FROM users
GROUP BY email
HAVING COUNT(*) > 1;
-- For composite unique constraints (multiple columns):
SELECT user_id, post_id, COUNT(*) as count
FROM likes
GROUP BY user_id, post_id
HAVING COUNT(*) > 1;
-- Check if the specific value you're trying to insert already exists:
SELECT * FROM users WHERE email = '[email protected]';
If duplicates already exist in the table, you must decide which rows to keep and remove the rest before proceeding (see the duplicate-removal step below).
Out-of-sync sequences are the most common cause when inserting new (non-duplicate) data. Compare the maximum ID in the table with the next value the sequence will generate.
-- Find the sequence name (typically table_id_seq):
SELECT pg_get_serial_sequence('users', 'id');
-- Result: public.users_id_seq
-- Check the next value the sequence will generate (note: nextval() advances the sequence):
SELECT nextval('users_id_seq');
-- Check the maximum ID in the table:
SELECT MAX(id) FROM users;
-- If nextval() returns a value LESS than or equal to MAX(id), the sequence is out of sync.
-- Fix by resetting the sequence to the next available value:
SELECT setval('users_id_seq', (SELECT MAX(id) FROM users) + 1);
-- Verify the fix:
SELECT nextval('users_id_seq');
-- This should now be MAX(id) + 2 (the setval call used +1, and nextval adds another +1).
Always be conservative: use MAX(id) + 1, not just MAX(id), to leave room for the next insert.
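Two refinements may be worth adding here (a sketch, assuming the same users_id_seq name): you can inspect the sequence without advancing it, and you can guard against an empty table, where MAX(id) is NULL and the setval above would fail.
-- Inspect the sequence without consuming a value:
SELECT last_value, is_called FROM users_id_seq;
-- Resync even when the table is empty; the third argument false means
-- the next nextval() call returns exactly this value:
SELECT setval('users_id_seq', COALESCE((SELECT MAX(id) FROM users), 0) + 1, false);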
The ON CONFLICT clause prevents errors when duplicates are expected or acceptable. Choose the appropriate action:
Option A: Skip the insert if a conflict occurs (DO NOTHING)
INSERT INTO users (id, email, name)
VALUES (1, '[email protected]', 'John')
ON CONFLICT (email) DO NOTHING;
-- If [email protected] exists, the INSERT is silently skipped
Option B: Update the existing row instead (UPSERT)
INSERT INTO users (id, email, name)
VALUES (1, '[email protected]', 'John Doe')
ON CONFLICT (email) DO UPDATE SET
name = EXCLUDED.name,
updated_at = NOW();
-- If [email protected] exists, update the name instead
Option C: For composite unique constraints
INSERT INTO likes (user_id, post_id, created_at)
VALUES (5, 10, NOW())
ON CONFLICT (user_id, post_id) DO NOTHING;
The ON CONFLICT clause is the recommended approach for high-volume inserts where duplicates are unavoidable.
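Two related variants can also be handy (a sketch; the constraint name users_email_key is taken from the earlier error message and may differ in your schema): targeting the constraint by name instead of by column list, and using RETURNING to get the row's id back whether it was inserted or updated.
-- Target the constraint by name rather than by column list:
INSERT INTO users (email, name)
VALUES ('john@example.com', 'John')
ON CONFLICT ON CONSTRAINT users_email_key DO NOTHING;
-- Upsert and return the row id in a single round trip:
INSERT INTO users (email, name)
VALUES ('john@example.com', 'John Doe')
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name
RETURNING id;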
If your constraint allows only one copy of each value, delete the extra rows. A safe approach uses ctid (PostgreSQL's internal physical row identifier) to decide which copy to keep, such as the first-inserted or the most recent row.
-- Delete duplicate rows, keeping only the first one (lowest ctid):
DELETE FROM users a
USING users b
WHERE a.ctid > b.ctid
AND a.email = b.email;
-- Alternative: keep the most recent row by created_at (rows tied on created_at both survive; add a ctid tie-breaker if needed):
DELETE FROM users a
USING users b
WHERE a.email = b.email
AND a.created_at < b.created_at;
-- Verify the duplicates are gone:
SELECT email, COUNT(*) FROM users GROUP BY email HAVING COUNT(*) > 1;
Always back up your table before running DELETE statements, and test on a development copy first.
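If you want to preview exactly which rows a de-duplication would remove before deleting anything, a window-function query works well (a sketch that mirrors the first DELETE above by keeping the lowest ctid per email):
-- Rows with rn > 1 are the ones the first DELETE above would remove:
SELECT id, email, created_at, ctid,
       ROW_NUMBER() OVER (PARTITION BY email ORDER BY ctid) AS rn
FROM users
ORDER BY email, rn;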
Truncation can also leave a sequence out of step with its table. If you truncate without resetting the sequence and later reload rows with explicit IDs, or if the sequence has been reset to 1 while rows with higher IDs remain, new inserts will generate IDs that collide with existing data. The clean fix is to truncate with RESTART IDENTITY, or to reset the sequence manually.
-- Properly truncate and reset the sequence:
TRUNCATE TABLE users RESTART IDENTITY;
-- If the table is already truncated, manually reset (the third argument false makes the next nextval() return 1, not 2):
SELECT setval('users_id_seq', 1, false);
-- For tables with foreign key constraints, you may need CASCADE:
TRUNCATE TABLE users RESTART IDENTITY CASCADE;
RESTART IDENTITY resets all sequences owned by the truncated table to their initial values.
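An equivalent way to reset a sequence, if you prefer DDL over setval (assuming the users_id_seq name from earlier):
-- RESTART WITH 1 makes the next nextval() return 1:
ALTER SEQUENCE users_id_seq RESTART WITH 1;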
For tables using UUIDs instead of SERIAL: UUID columns never have out-of-sync sequence issues since they don't use sequences. If you see duplicate key violations with UUIDs, the application is generating duplicate UUIDs (extremely unlikely with cryptographically secure UUID libraries) or inserting manually without checking for duplicates first.
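For reference, a database-generated UUID key sidesteps sequence issues entirely. A sketch, assuming PostgreSQL 13+ where gen_random_uuid() is built in (older versions need the pgcrypto or uuid-ossp extension); the events table is hypothetical:
CREATE TABLE events (
    id      uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    payload jsonb
);
-- Omit id on insert and the default generates a fresh UUID:
INSERT INTO events (payload) VALUES ('{"type": "signup"}');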
PostgreSQL allows multiple NULL values in unique columns by default (NULL is not equal to NULL in standard SQL). If your constraint should also reject duplicate NULLs, use the NULLS NOT DISTINCT option (PostgreSQL 15 and later) when creating the constraint: CONSTRAINT unique_email UNIQUE NULLS NOT DISTINCT (email). This is useful for fields like "secondary_email" where repeated NULLs would otherwise slip past the constraint.
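A small sketch of the behavior, assuming PostgreSQL 15 or newer and a hypothetical contacts table:
-- NULLS NOT DISTINCT makes the constraint treat NULLs as equal:
CREATE TABLE contacts (
    id              serial PRIMARY KEY,
    secondary_email text,
    CONSTRAINT uniq_secondary_email UNIQUE NULLS NOT DISTINCT (secondary_email)
);
INSERT INTO contacts (secondary_email) VALUES (NULL);
-- The second NULL now raises error 23505; without NULLS NOT DISTINCT it would be allowed:
INSERT INTO contacts (secondary_email) VALUES (NULL);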
In high-concurrency environments, race conditions can cause violations even if each client checks for existence first: between the check and the insert, another client may insert the same value. ON CONFLICT (DO NOTHING or DO UPDATE) resolves the race atomically inside the database and avoids application-level retry loops; serializable transactions also prevent the anomaly, but they surface serialization failures that your application then has to retry.
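To make the race concrete, here is a sketch of a get-or-create pattern on the users table: the check-then-insert version can fail under concurrency, while the ON CONFLICT version resolves the conflict inside a single statement.
-- Racy: another session can insert the same email between these two statements.
SELECT id FROM users WHERE email = 'new@example.com';
INSERT INTO users (email, name) VALUES ('new@example.com', 'New User');
-- Safe: the conflict is handled atomically by the INSERT itself.
INSERT INTO users (email, name)
VALUES ('new@example.com', 'New User')
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name
RETURNING id;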
Index corruption is rare but possible. If you suspect it, rebuild the unique index: REINDEX INDEX index_name; This scans the table and rebuilds the index, validating consistency.
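Note that plain REINDEX takes a lock that blocks writes to the table while it runs; on a busy system the CONCURRENTLY form (PostgreSQL 12+) is usually preferable. A sketch, assuming the index name from the earlier error message:
-- Rebuild without blocking concurrent writes (cannot run inside a transaction block):
REINDEX INDEX CONCURRENTLY users_email_key;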
For performance: if you're doing bulk inserts with expected duplicates, use ON CONFLICT to avoid failed transactions. Failed inserts cause transaction rollback and wasted work. ON CONFLICT DO NOTHING is much faster than catching exceptions in the application.
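A typical bulk-load pattern along these lines (a sketch; the staging_users table and its columns are hypothetical): load the raw data into a staging table first, then move it into the target table in one statement and let ON CONFLICT absorb the duplicates.
-- Stage the raw rows first (for example via COPY), then insert in one pass:
INSERT INTO users (email, name)
SELECT email, name
FROM staging_users
ON CONFLICT (email) DO NOTHING;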