ERROR 1205 occurs when a transaction waits longer than the innodb_lock_wait_timeout limit (default 50 seconds) to acquire a row lock held by another transaction. This typically happens when transactions are slow, lock contention is high, or connections are idle but uncommitted. Fix by optimizing query performance, reducing transaction duration, increasing the timeout threshold, or killing blocking connections.
MySQL ERROR 1205 (SQLSTATE HY000) indicates that an InnoDB transaction exceeded the lock wait timeout while attempting to acquire an exclusive or shared row lock. This error occurs when a statement needs to lock a row that another transaction is already holding, forcing the requesting transaction to wait. If the wait exceeds the innodb_lock_wait_timeout parameter (default 50 seconds), the statement fails; by default only that statement is rolled back, and the entire transaction is rolled back only when innodb_rollback_on_timeout=1. Unlike deadlocks (ERROR 1213), which are detected and resolved automatically, lock wait timeouts are simply time-based failures. They indicate resource contention, slow transactions, or idle connections holding locks. The key distinction: InnoDB automatically detects and breaks deadlocks, but lock timeouts depend on the configured timeout value and explicit transaction management.
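To make the failure mode concrete, here is a minimal sketch of how the error arises, assuming a hypothetical accounts table with a primary-key column id and two separate client sessions:
-- Session 1: lock a row and leave the transaction open
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- acquires an exclusive row lock
-- (no COMMIT yet)

-- Session 2: needs the same row lock and has to wait
UPDATE accounts SET balance = balance + 100 WHERE id = 1;
-- After innodb_lock_wait_timeout seconds:
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction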
The first step is to identify which transactions are blocking others. Run this immediately when ERROR 1205 occurs to capture lock contention details.
-- View detailed InnoDB transaction and lock information
SHOW ENGINE INNODB STATUS;
-- Look for the "TRANSACTIONS" section which shows:
-- - All active transactions
-- - Locks each transaction holds
-- - Locks each transaction is waiting for
-- - Lock wait time in seconds
-- Example output shows:
-- TRANSACTIONS
-- trx id 12345, active 120 sec, process no 1234, OS thread id 5678
-- ...
-- LOCK WAIT 50 sec active, the following trx is waiting for this lock:
Alternative query using the Performance Schema data_lock_waits table (MySQL 8.0 and later; more detailed):
-- Show which transactions are waiting and which are blocking
SELECT
  waiting_trx.trx_id AS waiting_transaction,
  waiting_thread.processlist_id AS waiting_connection_id,
  waiting_thread.processlist_user AS waiting_user,
  waiting_thread.processlist_info AS waiting_query,
  blocking_trx.trx_id AS blocking_transaction,
  blocking_thread.processlist_id AS blocking_connection_id,
  blocking_thread.processlist_user AS blocking_user,
  blocking_thread.processlist_info AS blocking_query
FROM performance_schema.data_lock_waits
JOIN information_schema.innodb_trx waiting_trx
  ON waiting_trx.trx_id = data_lock_waits.requesting_engine_transaction_id
JOIN information_schema.innodb_trx blocking_trx
  ON blocking_trx.trx_id = data_lock_waits.blocking_engine_transaction_id
JOIN performance_schema.threads waiting_thread
  ON waiting_thread.thread_id = data_lock_waits.requesting_thread_id
JOIN performance_schema.threads blocking_thread
  ON blocking_thread.thread_id = data_lock_waits.blocking_thread_id;
This shows exactly which connection is blocking others and what query it's running.
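If the sys schema is installed (it ships by default with MySQL 5.7.7+ and 8.0), the sys.innodb_lock_waits view condenses the same information and even generates a ready-made KILL statement for the blocker; a sketch using its documented column names:
-- Condensed view of current InnoDB lock waits
SELECT
  wait_age_secs,
  locked_table,
  waiting_pid,
  waiting_query,
  blocking_pid,
  blocking_query,
  sql_kill_blocking_connection
FROM sys.innodb_lock_waits;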
Once you identify a blocking transaction that's idle or no longer needed, terminate it to release locks immediately.
-- List all active connections
SHOW PROCESSLIST;
-- Or use information_schema for more detail
SELECT
id,
user,
host,
db,
command,
time,
state,
info
FROM information_schema.processlist
WHERE command != 'Sleep'  -- note: idle 'Sleep' connections can still hold locks; remove this filter to see them
ORDER BY time DESC;
-- Kill a specific connection (replace 12345 with actual connection ID)
KILL 12345;
-- Kill only the currently executing statement, leaving the connection open
KILL QUERY 12345;
-- KILL CONNECTION is a synonym for plain KILL: it terminates the connection itself
KILL CONNECTION 12345;
Important: Only kill connections you're sure about. Killing application connections can cause cascading failures.
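A helper sketch for finding candidates: connections that sit idle in the 'Sleep' state while still holding an open InnoDB transaction are the usual culprits (the 300-second threshold is an arbitrary example; adjust it for your workload):
-- Generate KILL statements for idle connections that still have an open transaction
SELECT CONCAT('KILL ', p.id, ';') AS kill_statement,
       p.user, p.host, p.time AS idle_seconds
FROM information_schema.processlist p
JOIN information_schema.innodb_trx t
  ON t.trx_mysql_thread_id = p.id
WHERE p.command = 'Sleep'
  AND p.time > 300;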
Transaction lock timeouts often indicate that queries inside those transactions are too slow. Use the slow query log to identify problem queries.
-- Enable slow query log to capture queries taking longer than 2 seconds
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
-- View current slow query log file location
SHOW VARIABLES LIKE 'slow_query_log_file';
-- Check query execution time with EXPLAIN
EXPLAIN SELECT * FROM large_table WHERE indexed_column = 'value';
-- If the EXPLAIN shows "type: ALL", the query is doing a full table scan
-- Add an index to speed it up:
CREATE INDEX idx_column ON large_table(indexed_column);
Optimize queries by:
- Using EXPLAIN to identify full table scans
- Adding indexes on frequently searched columns
- Reducing the number of rows processed per statement (see the batching sketch after this list)
- Moving slow operations (file I/O, network calls) outside transactions
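One way to reduce the rows processed (and locked) per statement is to batch large modifications into bounded chunks so each statement commits quickly and releases its locks. A sketch, assuming a hypothetical orders table and autocommit enabled:
-- Instead of one huge statement that locks millions of rows:
-- UPDATE orders SET status = 'archived' WHERE created_at < '2023-01-01';

-- Process a bounded chunk, then repeat until no rows remain
UPDATE orders
SET status = 'archived'
WHERE created_at < '2023-01-01'
  AND status != 'archived'
LIMIT 10000;
-- ROW_COUNT() reports how many rows the last statement changed; stop when it returns 0
SELECT ROW_COUNT();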
Transactions should be as short as possible. Moving slow operations outside the transaction reduces lock contention.
-- WRONG: Slow operations inside transaction hold locks longer
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- This performs a network call to external service (slow!)
-- Result = call_external_payment_service();
UPDATE accounts SET transaction_status = 'pending' WHERE id = 1;
COMMIT;
-- CORRECT: Fast DB work in transaction, slow work outside
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
-- Now do slow work after releasing locks
-- Result = call_external_payment_service();
-- Update status in a new, quick transaction
START TRANSACTION;
UPDATE accounts SET transaction_status = 'completed' WHERE id = 1;
COMMIT;
Best practices:
- Keep transactions as short as possible
- Perform all reads/checks before opening the transaction
- Move file I/O, API calls, and network operations outside transactions
- Use the READ COMMITTED isolation level where the application can tolerate it (reduces lock scope; avoid READ UNCOMMITTED, which allows dirty reads)
If your workload legitimately needs longer transaction times, increase the timeout threshold. This is a temporary measure; the real fix is optimizing slow queries.
-- View current timeout setting (default is 50 seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Increase for current session (changes revert when connection closes)
SET SESSION innodb_lock_wait_timeout = 120;
-- Permanently increase in my.cnf / mysql.cnf
-- Add to [mysqld] section:
-- innodb_lock_wait_timeout = 120
-- Verify after server restart
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
Guidelines:
- Default 50 seconds is reasonable for most applications
- 100-120 seconds for batch/reporting workloads
- If you need 300+ seconds, your queries are likely too slow (fix queries instead)
- Different sessions can have different timeout values using SET SESSION
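On MySQL 8.0 and later, SET PERSIST changes the value at runtime and keeps it across restarts without editing my.cnf; SET GLOBAL applies to new sessions but is lost on restart. A brief sketch:
-- Applies to new sessions immediately, lost at the next restart
SET GLOBAL innodb_lock_wait_timeout = 120;
-- MySQL 8.0+: applies at runtime and survives restarts
SET PERSIST innodb_lock_wait_timeout = 120;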
By default, MySQL only rolls back the failed statement on lock timeout, leaving the transaction open. To roll back the entire transaction automatically, enable innodb_rollback_on_timeout.
-- Check current setting (default is OFF/0)
SHOW VARIABLES LIKE 'innodb_rollback_on_timeout';
-- Enable in my.cnf / mysql.cnf (requires restart)
-- Add to [mysqld] section:
-- innodb_rollback_on_timeout = 1
-- Restart MySQL for the change to take effect
Explanation:
- Default (innodb_rollback_on_timeout = 0): Only the timed-out statement is rolled back. Earlier statements in the transaction remain. Your application must handle this carefully.
- Enabled (innodb_rollback_on_timeout = 1): The entire transaction is rolled back on timeout, returning the database to pre-transaction state.
Most modern applications expect full transaction rollback on timeout, so enabling this is recommended.
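A sketch of why the default matters, reusing the hypothetical accounts table: with innodb_rollback_on_timeout=0, a timeout on the second statement leaves the first statement's change pending inside a still-open transaction, so the application must explicitly commit or roll back.
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- succeeds
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- blocked by another session
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
-- innodb_rollback_on_timeout = 0: only the second UPDATE is undone; the transaction
-- stays open and the first UPDATE is still pending until you COMMIT or ROLLBACK.
ROLLBACK;
-- innodb_rollback_on_timeout = 1: the entire transaction is already rolled back.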
Since lock timeouts can happen even with optimized code, robust applications should retry failed transactions automatically with exponential backoff.
// Example: Node.js with exponential backoff retry
async function executeWithRetry(query, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await executeQuery(query);
    } catch (error) {
      if (error.code === 'ER_LOCK_WAIT_TIMEOUT' && attempt < maxRetries) {
        // Wait before retrying: 100ms, 200ms, 400ms, etc.
        const delayMs = 100 * Math.pow(2, attempt - 1);
        console.log(`Lock timeout, retrying in ${delayMs}ms (attempt ${attempt}/${maxRetries})`);
        await new Promise(resolve => setTimeout(resolve, delayMs));
        continue;
      }
      throw error; // Other errors or final attempt failed
    }
  }
}
// Usage
try {
  await executeWithRetry('UPDATE users SET last_login = NOW() WHERE id = 123');
} catch (error) {
  console.error('Transaction failed after retries:', error);
}
Retry strategies:
- Exponential backoff prevents thundering herd (all retries firing at once)
- Max 3-5 retries usually sufficient (beyond that indicates systemic problem)
- Log all retries for monitoring and debugging
Chronic lock timeouts may indicate poor table design. Analyze which rows/tables are causing contention and redesign if needed.
-- Identify most-contended tables using Performance Schema
SELECT
object_schema,
object_name,
count_star AS lock_waits,
count_read AS read_waits,
count_write AS write_waits
FROM performance_schema.table_lock_waits_summary_by_table
WHERE count_star > 0
ORDER BY count_star DESC
LIMIT 10;
-- Check lock wait times per table
SELECT
object_schema,
object_name,
count_star,
sum_timer_wait / 1000000000000 AS wait_time_seconds
FROM performance_schema.table_lock_waits_summary_by_table
WHERE count_star > 0
ORDER BY sum_timer_wait DESC;
Schema improvements for high-contention tables:
- Row-level sharding: Split users table into user_shard_1, user_shard_2, etc. based on user_id % shard_count
- Separate counters: Instead of updating counters in the main table (a classic hot spot), use separate summary tables updated periodically (see the sketch after this list)
- Denormalization: Store frequently updated aggregates separately instead of recomputing from many row updates
- Archive old data: Move historical records to archive tables to reduce active dataset size
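A sketch of the separate-counter pattern, using hypothetical page_views and page_view_deltas tables (the latter with an auto-increment id): writers append delta rows instead of all updating the same counter row, and a periodic job folds the deltas into the summary.
-- Hot spot: every request fights over the lock on the same counter row
-- UPDATE page_views SET view_count = view_count + 1 WHERE page_id = 42;

-- Instead, append a delta row; inserts do not contend on a shared row
INSERT INTO page_view_deltas (page_id, delta) VALUES (42, 1);

-- Periodic job: fold deltas recorded so far into the summary, then remove them
SET @cutoff = (SELECT COALESCE(MAX(id), 0) FROM page_view_deltas);
START TRANSACTION;
UPDATE page_views pv
JOIN (SELECT page_id, SUM(delta) AS total
      FROM page_view_deltas
      WHERE id <= @cutoff
      GROUP BY page_id) d ON d.page_id = pv.page_id
SET pv.view_count = pv.view_count + d.total;
DELETE FROM page_view_deltas WHERE id <= @cutoff;
COMMIT;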
Lock Timeout vs Deadlock: These are distinct errors that require different solutions:
- ERROR 1205 (Lock Wait Timeout): A transaction waited too long for a lock. Indicates one transaction is blocking another. Default behavior rolls back only the failed statement.
- ERROR 1213 (Deadlock): Two or more transactions are mutually blocking each other in a circle. InnoDB automatically detects and rolls back one transaction to break the deadlock.
Isolation Levels: Lock wait behavior varies by transaction isolation level:
- SERIALIZABLE (most strict): Highest lock contention, highest timeout likelihood
- REPEATABLE READ (default MySQL): Moderate lock contention
- READ COMMITTED (low lock contention): Disables most gap locking and reduces lock waits and timeouts. Many high-concurrency systems use this.
- READ UNCOMMITTED (not recommended): Allows dirty reads but minimizes locks
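A sketch of switching isolation level per session or for just the next transaction:
-- Apply to all subsequent transactions in this session
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Or apply to the next transaction only
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Verify (MySQL 5.7.20+/8.0; older versions expose tx_isolation instead)
SELECT @@transaction_isolation;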
Connection pooling: High-concurrency applications should use connection pooling with appropriate timeout settings. Connection pools should:
- Reuse connections instead of creating new ones
- Timeout idle connections that haven't executed queries in N seconds
- Queue client requests if all pool connections are busy
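A sketch of such a pool in Node.js, assuming the mysql2 package (the option names shown are mysql2's; other drivers use different names):
const mysql = require('mysql2/promise');

// Bounded pool: connections are reused and excess requests queue
// instead of opening ever more connections that may idle with open transactions
const pool = mysql.createPool({
  host: 'db.example.internal',      // hypothetical host
  user: 'app',
  password: process.env.DB_PASSWORD,
  database: 'app',
  connectionLimit: 20,              // maximum simultaneous connections
  waitForConnections: true,         // queue requests when the pool is exhausted
  queueLimit: 100,                  // fail fast rather than queueing without bound
});

// Connections are checked out and returned to the pool automatically
const [rows] = await pool.query('SELECT 1');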
Metadata locks: Besides row locks, MySQL uses metadata locks (MDL) for table structure changes:
- Running an ALTER TABLE blocks all queries on that table
- Long-running queries can block ALTER TABLE, CREATE INDEX, etc.
- Use SHOW PROCESSLIST and SHOW ENGINE INNODB STATUS to spot these
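A sketch for spotting pending metadata locks directly via the Performance Schema (the wait/lock/metadata/sql/mdl instrument must be enabled; it is on by default in MySQL 8.0):
-- Sessions waiting for a metadata lock, and the objects they are waiting on
SELECT object_type, object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE lock_status = 'PENDING';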
Performance impact of deadlock detection: When innodb_deadlock_detect is enabled (default), InnoDB uses graph algorithms to detect cycles. On very high-concurrency systems (thousands of threads), this detection overhead can become a bottleneck. In such cases, disabling innodb_deadlock_detect and relying on innodb_lock_wait_timeout may be faster (trades faster normal operation for slower timeout handling).
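A sketch of checking and, cautiously, disabling deadlock detection; innodb_deadlock_detect is a dynamic global variable, so no restart is required, but pair it with a short lock wait timeout so stuck waiters fail quickly:
-- Check whether deadlock detection is enabled (default ON)
SHOW VARIABLES LIKE 'innodb_deadlock_detect';
-- Disable it; lock waits then resolve only via innodb_lock_wait_timeout
SET GLOBAL innodb_deadlock_detect = OFF;
SET GLOBAL innodb_lock_wait_timeout = 10;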