This error occurs when multiple simultaneous requests try to modify the same database rows. PostgreSQL detects these conflicts (through row locks, unique constraints, and, at stricter isolation levels, serialization checks) and rejects one transaction to maintain data integrity.
When multiple transactions attempt to modify the same database rows at nearly the same time, PostgreSQL must decide whose changes can safely apply. The "database conflict" error means PostgreSQL detected that your transaction conflicts with another concurrent transaction and cannot serialize both safely. Rather than allowing both transactions to proceed and potentially lose data, PostgreSQL aborts one transaction and requires it to be retried. This is a feature, not a bug: it protects your data integrity.

The conflict typically occurs in these scenarios:

- Two users updating the same record simultaneously
- Bulk operations competing for the same rows
- Multiple requests in rapid succession modifying shared data
- Race conditions in concurrent transactions
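A minimal in-memory sketch of why one of two concurrent writers must lose. No real database is involved; the `table` object and its `version` field are stand-ins for a row and PostgreSQL's conflict detection:

```javascript
// In-memory stand-in for a database row with a version counter.
// Two "transactions" read the same row, then both try to write back;
// the compare-and-set check forces the second writer to fail, which is
// the role PostgreSQL's conflict detection plays for real tables.
const table = { 1: { balance: 100, version: 1 } };

function read(id) {
  return { ...table[id] };
}

function write(id, changes, expectedVersion) {
  if (table[id].version !== expectedVersion) {
    throw new Error('conflict: row changed since it was read');
  }
  table[id] = { ...table[id], ...changes, version: expectedVersion + 1 };
}

// Both requests read version 1 of the same row...
const a = read(1);
const b = read(1);

write(1, { balance: a.balance + 10 }, a.version); // first writer succeeds
try {
  write(1, { balance: b.balance - 20 }, b.version); // second writer conflicts
} catch (err) {
  console.log(err.message); // the losing transaction must be retried
}
```

If both writes were allowed, the first writer's change would be silently overwritten; rejecting one is the only safe outcome.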
Check your error logs and network timeline to confirm the error occurs during concurrent requests. Look for:
409 Conflict
Database conflict, usually related to concurrent requests

If the error only happens with concurrent requests but works fine in isolation, it is almost certainly a concurrency conflict.
Add automatic retry logic to your application. When a conflict occurs, wait a short time and retry the operation.
```javascript
// Example retry logic: retry on 409 with exponential backoff
async function executeWithRetry(fn, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 409 && i < maxRetries - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, etc.
        await new Promise(resolve =>
          setTimeout(resolve, Math.pow(2, i) * 100)
        );
        continue;
      }
      throw error;
    }
  }
}

// Usage. Note that supabase-js returns errors in the result object
// rather than throwing, so throw explicitly to trigger the retry.
const result = await executeWithRetry(async () => {
  const { data, error } = await supabase
    .from('your_table')
    .update({ column: value })
    .eq('id', id);
  if (error) throw error;
  return data;
});
```

The retry usually succeeds because by the time the second attempt runs, the conflicting transaction has completed.
Implement a version column on tables with frequent concurrent updates. This prevents conflicts by detecting when the row changed since you last read it.
Schema changes are not made through supabase-js; add the column with SQL (for example in a migration):

```sql
-- Add a version column to the table
ALTER TABLE users ADD COLUMN version bigint NOT NULL DEFAULT 1;
```

```javascript
// When updating, check the version
const { data, error } = await supabase
  .from('users')
  .update({
    name: newName,
    version: version + 1
  })
  .eq('id', userId)
  .eq('version', version) // Only update if version matches
  .select();

if (error || data.length === 0) {
  // A version mismatch does not raise an error; it simply updates
  // zero rows. The row changed underneath us: refetch and retry.
  const { data: latest } = await supabase
    .from('users')
    .select()
    .eq('id', userId)
    .single();
  // Retry with fresh data
}
```

This prevents conflicts by catching data changes before they cause serialization errors.
For operations that absolutely cannot fail, use PostgreSQL's row-level locking to serialize access to specific rows.
```javascript
// Lock the row for update before modifying it
const result = await supabase.rpc('lock_and_update', {
  row_id: id,
  new_value: value
});
```

Create a stored procedure that uses SELECT FOR UPDATE:

```sql
CREATE OR REPLACE FUNCTION lock_and_update(row_id UUID, new_value TEXT)
RETURNS TABLE(success BOOLEAN) AS $$
BEGIN
  -- Take the row lock explicitly; other transactions wait here
  PERFORM 1 FROM my_table WHERE id = row_id FOR UPDATE;

  UPDATE my_table
  SET value = new_value
  WHERE id = row_id;

  RETURN QUERY SELECT true;
END;
$$ LANGUAGE plpgsql;
```

This forces transactions to wait instead of conflicting, but use it sparingly, as it reduces concurrency.
Keep database transactions as small and fast as possible. Long-running transactions are more likely to conflict with others.
```javascript
// BAD: a slow external call sits between the read and the write,
// widening the window in which another request can touch the row
const { data: order } = await supabase.from('orders').select().single();
const externalData = await fetch('https://api.example.com/data'); // Slow!
await supabase.from('orders').update({ processed: true });

// GOOD: fetch the external data first, then update quickly
const externalData = await fetch('https://api.example.com/data'); // Outside the critical path
await supabase.from('orders').update({ processed: true });
```

Minimize what happens inside database transactions. Fetch external data first, then perform database updates quickly.
PostgreSQL Isolation Levels: PostgreSQL's default isolation level is READ COMMITTED. Stricter levels (REPEATABLE READ and SERIALIZABLE) prevent more anomalies but abort more transactions under contention, so they require client-side retry logic. You can change the level per transaction if needed.
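For example, the isolation level can be set per transaction in SQL (the `orders` update here is an illustrative placeholder):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE orders SET processed = true WHERE id = 1;
COMMIT;
```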
Connection Pooling: High connection counts under concurrent load can cause related errors. If you see 'max_clients' errors alongside database conflicts, upgrade your compute instance or use connection pooling more aggressively.
Row-Level Security (RLS): Conflicts can be more frequent with RLS enabled on heavily accessed tables. Performance impact is usually negligible, but consider the table's access patterns.
Batch Operations: Bulk updates are more prone to conflicts. Consider breaking large batch operations into smaller chunks with delays between them.
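Chunking can be sketched as below. The `chunk` helper, the 50-row batch size, and the 100ms delay are illustrative choices, and `updateRows` stands in for whatever batch update call you use (for example a supabase upsert):

```javascript
// Split an array of rows into fixed-size chunks
function chunk(rows, size) {
  const chunks = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Process each chunk sequentially with a short pause between batches,
// giving concurrent writers a window to commit without conflicting
async function updateInChunks(rows, updateRows, size = 50, delayMs = 100) {
  for (const batch of chunk(rows, size)) {
    await updateRows(batch); // your batch update call goes here
    await sleep(delayMs);
  }
}
```

Smaller batches hold locks on fewer rows at a time, which shrinks the set of rows any concurrent transaction can collide with.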
Supavisor: Supabase's connection pooler (Supavisor) supports both session mode and transaction mode. For transaction mode, connect to port 6543 instead of 5432, but be aware transaction mode has limitations (for example, it does not support prepared statements or other session-level features).
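For illustration, the two modes differ only in the port used in the connection string (`USER`, `PASSWORD`, and `POOLER_HOST` are placeholders; copy the real values from your project's connection settings):

```
# Session mode, port 5432
postgresql://USER:PASSWORD@POOLER_HOST:5432/postgres

# Transaction mode, port 6543
postgresql://USER:PASSWORD@POOLER_HOST:6543/postgres
```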