PostgreSQL cannot acquire a necessary lock on a table because another transaction holds a conflicting lock. This error (code 55P03) typically occurs during DDL operations or when lock_timeout is exceeded.
The "Lock not available" error (ERROR 55P03) occurs when PostgreSQL cannot acquire a required lock on a table or row because another active transaction holds a conflicting lock. PostgreSQL uses locks to coordinate concurrent access to database objects. When one transaction needs a lock that conflicts with a lock held by another transaction, and the lock_timeout parameter is set, the waiting transaction will be aborted with this error instead of waiting indefinitely. This is a safety mechanism to prevent applications from hanging when lock contention becomes problematic.
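A minimal way to reproduce the error, assuming a throwaway table named demo (the table name and timeout value are illustrative):

```sql
-- Session 1: take an exclusive lock and leave the transaction open
BEGIN;
LOCK TABLE demo IN ACCESS EXCLUSIVE MODE;

-- Session 2: give up after 2 seconds instead of waiting forever
SET lock_timeout = '2s';
ALTER TABLE demo ADD COLUMN note text;
-- ERROR:  canceling statement due to lock timeout
-- (SQLSTATE 55P03)
```

Without the SET lock_timeout, session 2 would simply hang until session 1 commits or rolls back.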
Query the pg_stat_activity and pg_locks system views to find which transactions are holding locks:
SELECT
blocked_locks.pid AS blocked_pid,
blocked_activity.usename AS blocked_user,
blocking_locks.pid AS blocking_pid,
blocking_activity.usename AS blocking_user,
blocked_activity.query AS blocked_statement,
blocking_activity.query AS blocking_statement
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype
AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;

This query shows which PIDs are blocking others and which statements are causing the contention.
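On PostgreSQL 9.6 and later, the built-in pg_blocking_pids() function offers a much shorter alternative to the join above:

```sql
-- List every blocked session together with the PIDs blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

The function returns an array, since a session can be waiting behind several lock holders at once.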
Once you identify the blocking transaction PID, terminate it (if safe to do so):
SELECT pg_terminate_backend(blocking_pid);

Replace blocking_pid with the actual PID from the previous query. This closes the connection holding the conflicting lock, allowing your operation to proceed. Be cautious on production systems: verify the transaction is safe to terminate before running this command.
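Before terminating a connection outright, consider the gentler pg_cancel_backend(), which cancels only the currently running query and keeps the session alive (blocking_pid is the same placeholder as above):

```sql
-- Cancel the running query but keep the connection open
SELECT pg_cancel_backend(blocking_pid);

-- If the session then sits idle in transaction and still holds
-- its locks, escalate to terminating the whole connection
SELECT pg_terminate_backend(blocking_pid);
```

Cancellation is usually enough when the blocker is a long-running query; termination is needed when the blocker is an open transaction that is not actively executing anything.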
After terminating the blocking transaction, retry your original operation. If your application can retry automatically, use exponential backoff with jitter:
import time
import random

from psycopg2 import errors  # assuming the psycopg2 driver

max_retries = 5
for attempt in range(max_retries):
    try:
        cursor.execute("CREATE INDEX idx_name ON table_name(column_name)")
        break
    except errors.LockNotAvailable:  # SQLSTATE 55P03 (psycopg2 >= 2.8)
        if attempt == max_retries - 1:
            raise
        # Roll back the failed transaction on this connection before
        # retrying, then back off exponentially with jitter to avoid
        # a retry stampede
        wait_time = 2 ** attempt + random.uniform(0, 1)
        print(f"Lock contention, retrying in {wait_time:.1f}s")
        time.sleep(wait_time)

If you anticipate lock contention for specific DDL operations, set a higher lock_timeout before executing them:
SET lock_timeout = '5min';
CREATE INDEX CONCURRENTLY idx_name ON table_name(column_name);
RESET lock_timeout;

Set the timeout to a value that makes sense for your workload. The default (0) disables the timeout entirely, so sessions wait indefinitely. Setting an explicit value prevents operations from hanging forever while still allowing a reasonable wait.
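For DDL that can run inside a transaction (unlike CREATE INDEX CONCURRENTLY), SET LOCAL scopes the setting to that transaction, so there is no risk of forgetting the RESET; the table and column names here are illustrative:

```sql
BEGIN;
-- Applies only until COMMIT/ROLLBACK on this connection
SET LOCAL lock_timeout = '10s';
ALTER TABLE orders ADD COLUMN archived boolean DEFAULT false;
COMMIT;
```

If the ALTER TABLE cannot get its lock within 10 seconds, the transaction fails with ERROR 55P03 and the setting disappears with it.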
When creating indexes on large tables, always use CREATE INDEX CONCURRENTLY instead of regular CREATE INDEX. This allows other transactions to continue without blocking:
CREATE INDEX CONCURRENTLY idx_large_table ON large_table(column_name);

Concurrent index creation takes longer and does more total work, but it does not hold the SHARE lock that a regular CREATE INDEX takes, which blocks writes (reads are still allowed) for the duration of the build. This is the recommended approach for production databases with active workloads.
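One caveat worth knowing: if a CREATE INDEX CONCURRENTLY fails partway (including from a lock timeout), it leaves behind an INVALID index that consumes space and slows writes until you drop it:

```sql
-- Find leftover invalid indexes from failed concurrent builds
SELECT c.relname
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE NOT i.indisvalid;

-- Drop the leftover, then re-run the CREATE INDEX CONCURRENTLY
DROP INDEX CONCURRENTLY idx_large_table;
```

Always check for invalid indexes before retrying a failed concurrent build.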
Review application code to ensure transactions are not held open longer than necessary. Long-running transactions (even idle ones in a transaction block) hold locks that can block other operations:
// BAD: Transaction held open waiting for user input
await client.query('BEGIN');
const user_input = await getUserInput(); // May take minutes
await client.query('UPDATE users SET status = $1', [user_input]);
await client.query('COMMIT');
// GOOD: Only transaction-protected operation is inside the block
const user_input = await getUserInput();
await client.query('UPDATE users SET status = $1 WHERE id = $2', [user_input, userId]);

Minimize the time transactions spend waiting for external input or resources.
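As a server-side safety net against the idle-in-transaction pattern above, PostgreSQL 9.6+ can automatically terminate sessions that sit idle inside an open transaction (the 5-minute value is illustrative; pick one suited to your workload):

```sql
-- Kill sessions idle inside an open transaction for more than 5 minutes,
-- releasing the locks they were holding
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```

This protects against forgotten BEGINs and crashed clients, not just slow application code.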
Lock contention is often a symptom of underlying schema or query design issues. Use pg_stat_statements to identify slow, frequently-run statements that are likely holding locks, enable log_lock_waits to log any session that waits on a lock longer than deadlock_timeout, and consider partitioning heavily-contended tables. The deadlock_timeout parameter (default 1s) controls how long PostgreSQL waits before checking for deadlocks; on high-concurrency systems, you may need to increase it. Also note that lock_timeout and deadlock_timeout are different: lock_timeout fails the statement if a lock cannot be acquired within the specified duration, while deadlock_timeout only affects how often deadlock detection runs.