MySQL ERROR 1213 occurs when two or more concurrent transactions are waiting for each other to release locks, creating a circular dependency. This is a transactional database lock contention issue that requires application-level retry logic and optimized transaction design to resolve.
ERROR 1213 (40001) is an InnoDB deadlock error that occurs when two or more transactions are unable to proceed because each one holds a lock that another transaction needs. This creates a circular wait condition in which neither transaction can continue. Key aspects of MySQL deadlocks:

1. **Lock Conflict**: A deadlock happens when transaction A holds a lock on row 1 and waits for a lock on row 2, while transaction B holds a lock on row 2 and waits for a lock on row 1. Neither can proceed.
2. **InnoDB Automatic Detection**: InnoDB automatically detects deadlocks and rolls back one of the transactions to break the cycle. The rolled-back transaction receives ERROR 1213.
3. **Row-Level Locking**: InnoDB uses row-level locking for UPDATE, INSERT, DELETE, and SELECT...FOR UPDATE operations. Even single-row operations can deadlock under high concurrency.
4. **Transactional Nature**: Deadlocks are not database bugs; they are an expected consequence of concurrent access under InnoDB's row-level locking. Your application must handle them gracefully.
5. **Not Always Preventable**: Complete deadlock prevention is impossible. Well-designed applications catch ERROR 1213 and retry the transaction.
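The circular wait in point 1 can be modeled as a wait-for graph, where an edge A -> B means transaction A is waiting for a lock held by B. InnoDB's detector looks for a cycle in that graph; the following is a minimal, purely illustrative Python sketch of the same idea (the graph structure and names are hypothetical, not InnoDB internals):

```python
# Hypothetical illustration: detect a cycle in a wait-for graph.
# An edge A -> B means "transaction A waits for a lock held by B".
def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle."""
    visiting, done = set(), set()

    def dfs(txn):
        if txn in done:
            return False
        if txn in visiting:          # back edge: circular wait
            return True
        visiting.add(txn)
        for holder in wait_for.get(txn, []):
            if dfs(holder):
                return True
        visiting.remove(txn)
        done.add(txn)
        return False

    return any(dfs(txn) for txn in wait_for)

# A waits for B and B waits for A: the classic two-transaction deadlock
print(has_deadlock({"A": ["B"], "B": ["A"]}))   # True
print(has_deadlock({"A": ["B"], "B": []}))      # False
```

When InnoDB finds such a cycle, it picks a victim (usually the transaction that modified the fewest rows) and rolls it back, which is what surfaces as ERROR 1213.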
First, enable comprehensive deadlock logging to understand what caused the deadlock:
-- Check if deadlock detection is enabled (default is ON)
SHOW VARIABLES LIKE 'innodb_deadlock_detect';
-- Enable logging of all deadlocks to error log (not just the last one)
SET GLOBAL innodb_print_all_deadlocks = ON;
-- Verify the setting
SHOW VARIABLES LIKE 'innodb_print_all_deadlocks';

With innodb_print_all_deadlocks enabled, every deadlock (not just the most recent) is written to the MySQL error log:
# Check MySQL error log for deadlock details
sudo tail -f /var/log/mysql/error.log
# Or check the log file specified in MySQL configuration
mysql -e "SHOW VARIABLES LIKE 'log_error';"
# View the most recent deadlock from the mysql client
mysql -e "SHOW ENGINE INNODB STATUS\G"
# Look for the LATEST DETECTED DEADLOCK section

The deadlock information shows which transactions were involved and which queries caused the lock conflict.
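For programmatic monitoring, the LATEST DETECTED DEADLOCK section can be cut out of the SHOW ENGINE INNODB STATUS text with plain string handling. A hedged sketch (`latest_deadlock_section` and the sample text are illustrative; fetch the real status text through your driver):

```python
# Hypothetical helper: pull the deadlock section out of the
# SHOW ENGINE INNODB STATUS output (a single large text blob).
def latest_deadlock_section(status_text):
    """Return the LATEST DETECTED DEADLOCK section, or None if absent."""
    marker = "LATEST DETECTED DEADLOCK"
    start = status_text.find(marker)
    if start == -1:
        return None
    # The next major section header in the status output is TRANSACTIONS
    end = status_text.find("TRANSACTIONS", start)
    return status_text[start:end].strip() if end != -1 else status_text[start:].strip()

sample = """...
LATEST DETECTED DEADLOCK
------------------------
*** (1) TRANSACTION:
TRX ID 123456
*** WE ROLL BACK TRANSACTION (1)
------------
TRANSACTIONS
------------
"""
section = latest_deadlock_section(sample)
print(section.splitlines()[0])  # LATEST DETECTED DEADLOCK
```

Logging this section on every caught ERROR 1213 gives you a history of which queries keep colliding, even without innodb_print_all_deadlocks.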
Check your current isolation level and examine deadlock details:
-- Check current isolation level for session
SHOW VARIABLES LIKE 'transaction_isolation';
-- View InnoDB status including deadlock information
SHOW ENGINE INNODB STATUS\G
-- Look for this section in the output:
-- *** (1) TRANSACTION:
-- *** (2) TRANSACTION:
-- *** WE ROLL BACK TRANSACTION (1)

The output shows:
- Transaction IDs involved
- Locks held by each transaction
- Locks being waited for
- Which transaction was rolled back by InnoDB
Example deadlock scenario from SHOW ENGINE INNODB STATUS:
*** (1) TRANSACTION:
TRX ID 123456, ACTIVE 2 sec updating or deleting
MySQL thread id 123, query id 456
*** (2) TRANSACTION:
TRX ID 123457, ACTIVE 1 sec updating or deleting
MySQL thread id 124, query id 457
*** WE ROLL BACK TRANSACTION (1)

Add retry logic to handle ERROR 1213 gracefully:
// Node.js with mysql2/promise - retry with exponential backoff
const mysql = require('mysql2/promise');

async function executeWithRetry(query, params, maxRetries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    let connection;
    try {
      connection = await mysql.createConnection(config); // `config` holds your connection settings
      const [rows] = await connection.execute(query, params);
      return rows;
    } catch (error) {
      lastError = error;
      // Check if it's a deadlock error (code 1213)
      if (error.code === 'ER_LOCK_DEADLOCK' || error.errno === 1213) {
        if (attempt < maxRetries) {
          // Exponential backoff: 100ms, 200ms, 400ms
          const delay = 100 * Math.pow(2, attempt - 1);
          console.log(`Deadlock on attempt ${attempt}, retrying in ${delay}ms...`);
          await new Promise(resolve => setTimeout(resolve, delay));
        } else {
          throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
        }
      } else {
        throw error; // Non-deadlock error, don't retry
      }
    } finally {
      if (connection) await connection.end(); // Always release the connection
    }
  }
  throw lastError;
}
// Usage
try {
const result = await executeWithRetry(
'UPDATE accounts SET balance = balance - ? WHERE id = ?',
[100, account_id]
);
} catch (error) {
console.error('Transaction failed:', error);
}# Python with mysql-connector-py
import mysql.connector
import time

def execute_with_retry(query, params, max_retries=3):
    last_error = None
    for attempt in range(1, max_retries + 1):
        connection = None
        try:
            connection = mysql.connector.connect(**config)  # `config` holds connection settings
            cursor = connection.cursor()
            cursor.execute(query, params)
            # Fetch rows for SELECT; report affected rows for UPDATE/DELETE
            result = cursor.fetchall() if cursor.with_rows else cursor.rowcount
            connection.commit()
            cursor.close()
            return result
        except mysql.connector.Error as error:
            last_error = error
            # Check if it's a deadlock (error code 1213)
            if error.errno == 1213:
                if attempt < max_retries:
                    # Exponential backoff: 0.1s, 0.2s, 0.4s
                    delay = 0.1 * (2 ** (attempt - 1))
                    print(f"Deadlock on attempt {attempt}, retrying in {delay}s...")
                    time.sleep(delay)
                else:
                    raise Exception(f"Failed after {max_retries} attempts: {error}")
            else:
                raise
        finally:
            if connection and connection.is_connected():
                connection.close()
    raise last_error

// PHP with PDO
function executeWithRetry($pdo, $query, $params, $maxRetries = 3) {
$lastError = null;
for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
try {
$stmt = $pdo->prepare($query);
$stmt->execute($params);
return $stmt->fetchAll(PDO::FETCH_ASSOC);
} catch (PDOException $e) {
$lastError = $e;
// Check for deadlock (SQLSTATE 40001 / MySQL error 1213)
if ($e->getCode() == '40001' || (isset($e->errorInfo[1]) && $e->errorInfo[1] == 1213)) {
if ($attempt < $maxRetries) {
$delay = 100 * pow(2, $attempt - 1); // milliseconds
echo "Deadlock on attempt $attempt, retrying in {$delay}ms...";
usleep($delay * 1000);
} else {
throw new Exception("Failed after $maxRetries attempts: {$e->getMessage()}");
}
} else {
throw $e; // Non-deadlock error
}
}
}
throw $lastError;
}

Organize database operations to access tables and rows in the same order:
// BAD: Inconsistent lock ordering causes deadlocks
// (assume each function body runs inside a single transaction)
// Transaction 1: locks user -> locks account
async function transfer1() {
await db.query('UPDATE users SET last_tx = NOW() WHERE id = ?', [user1]);
await db.query('UPDATE accounts SET balance = balance - 100 WHERE id = ?', [account1]);
}
// Transaction 2: locks account -> locks user (opposite order!)
async function transfer2() {
await db.query('UPDATE accounts SET balance = balance + 100 WHERE id = ?', [account1]);
await db.query('UPDATE users SET last_tx = NOW() WHERE id = ?', [user2]);
}
// GOOD: Consistent ordering prevents deadlocks
async function transfer1Ordered() {
await db.query('UPDATE accounts SET balance = balance - 100 WHERE id = ?', [account1]);
await db.query('UPDATE users SET last_tx = NOW() WHERE id = ?', [user1]);
}
async function transfer2Ordered() {
await db.query('UPDATE accounts SET balance = balance + 100 WHERE id = ?', [account1]);
await db.query('UPDATE users SET last_tx = NOW() WHERE id = ?', [user2]);
}

Best practices for consistent ordering:
-- Order by primary key/ID to ensure consistent lock order
-- When updating multiple rows, lock them in ID order
UPDATE accounts
SET balance = balance - 100
WHERE id IN (1, 2, 3)
ORDER BY id; -- Explicit ordering
-- For complex transactions, acquire locks in consistent order
START TRANSACTION;
-- First: lock all accounts in order
SELECT * FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
-- Then: update in same order
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

Improve query performance and reduce the number of rows locked:
-- Check query execution to see how many rows are scanned/locked
EXPLAIN SELECT * FROM transactions WHERE user_id = 123 AND status = 'pending';
-- If scanning many rows, add indexes
CREATE INDEX idx_transactions_user_status ON transactions(user_id, status);
-- Without an index, this update scans many rows and locks each one it examines
UPDATE transactions SET status = 'processed' WHERE user_id = 123;
-- Every scanned row stays locked until the transaction ends
-- After index - scans fewer rows and locks only matching ones
EXPLAIN UPDATE transactions SET status = 'processed' WHERE user_id = 123;
-- Verify index is being used
SHOW INDEX FROM transactions;
-- For high-traffic tables, consider composite indexes
CREATE INDEX idx_accounts_user_updated ON accounts(user_id, updated_at);
-- Use EXPLAIN to verify index usage
EXPLAIN SELECT * FROM accounts WHERE user_id = 123 AND updated_at > NOW() - INTERVAL 1 HOUR;

Index impact on deadlocks:
- Without index: Query locks entire table or many rows
- With index: Query locks only rows matching the condition
- Fewer locked rows = fewer deadlock conflicts
Minimize time spent holding locks:
// BAD: Long transaction with business logic inside
async function processPayment(paymentId) {
await db.query('START TRANSACTION');
// This validation happens INSIDE the transaction, holding locks
const payment = await db.query('SELECT * FROM payments WHERE id = ?', [paymentId]);
const isValid = await externalValidationService.check(payment);
if (!isValid) throw new Error('Invalid payment');
// Business logic delay while holding locks
await sendEmail(payment.email);
// Finally update database
await db.query('UPDATE payments SET status = "processed" WHERE id = ?', [paymentId]);
await db.query('UPDATE accounts SET balance = balance + ? WHERE id = ?', [payment.amount, payment.account_id]);
await db.query('COMMIT');
}
// GOOD: Move logic outside transaction, keep locks short
async function processPaymentOptimized(paymentId) {
// Validate OUTSIDE of transaction
const payment = await db.query('SELECT * FROM payments WHERE id = ?', [paymentId]);
const isValid = await externalValidationService.check(payment);
if (!isValid) throw new Error('Invalid payment');
// Send email OUTSIDE of transaction
await sendEmail(payment.email);
// Keep database changes SHORT
await db.query('START TRANSACTION');
// Only database operations in transaction
await db.query('UPDATE payments SET status = "processed" WHERE id = ?', [paymentId]);
await db.query('UPDATE accounts SET balance = balance + ? WHERE id = ?', [payment.amount, payment.account_id]);
await db.query('COMMIT');
}
-- Database-level version: keep the transaction short
START TRANSACTION;
-- UPDATE acquires exclusive row locks itself; no FOR UPDATE clause is needed
UPDATE accounts SET balance = balance - 100 WHERE id = 123;
UPDATE accounts SET balance = balance + 100 WHERE id = 456;
COMMIT;
-- Transaction ends immediately - locks are released

Transaction design principles:
- Move I/O (API calls, email, file operations) outside transactions
- Move validation logic before the transaction starts
- Only lock rows needed for database consistency
- Commit as soon as possible
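A lightweight way to keep these principles honest is to measure how long the transactional window actually stays open. The sketch below is a hypothetical guard (`short_transaction` is not a library API): wrap only the BEGIN..COMMIT portion of your code in it, and it flags transactions that hold locks longer than a budget:

```python
import time
from contextlib import contextmanager

# Hypothetical guard: warn when the code inside a transaction block
# (i.e., the window during which row locks are held) runs too long.
@contextmanager
def short_transaction(max_seconds=0.5, warn=print):
    start = time.monotonic()
    yield
    elapsed = time.monotonic() - start
    if elapsed > max_seconds:
        warn(f"transaction held locks for {elapsed:.3f}s "
             f"(budget {max_seconds}s) - move slow work outside it")

# Usage: only the database statements belong inside the block
warnings = []
with short_transaction(max_seconds=0.01, warn=warnings.append):
    time.sleep(0.05)   # stand-in for the UPDATE statements
print(len(warnings))   # 1
```

Wiring `warn` to your logger in production surfaces the slow transactions that are most likely to participate in deadlocks.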
In some cases, lowering the isolation level can reduce deadlock frequency:
-- Check current isolation level
SHOW VARIABLES LIKE 'transaction_isolation';
-- Default is REPEATABLE READ - its gap locks are a frequent contributor to deadlocks under high concurrency
-- Possible values:
-- - READ UNCOMMITTED (least isolation, fewest locks)
-- - READ COMMITTED (medium isolation, fewer locks)
-- - REPEATABLE READ (high isolation, many locks) - DEFAULT
-- - SERIALIZABLE (serialized, maximum locks)
-- Change isolation level for current session
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Change globally (affects new connections)
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Verify change
SHOW VARIABLES LIKE 'transaction_isolation';

IMPORTANT: an isolation level change affects ALL transactions:
-- READ COMMITTED vs REPEATABLE READ example
BEGIN;
SELECT * FROM accounts WHERE balance < 100; -- plain read: takes no row locks at either level
COMMIT;
-- In READ COMMITTED: locking reads and updates lock only the index records they touch
-- In REPEATABLE READ: locking reads and updates may also take gap/next-key locks
--   to prevent phantom rows, which increases deadlock probability
-- For most applications, keep REPEATABLE READ (default) and fix deadlocks with:
-- 1. Retry logic
-- 2. Consistent ordering
-- 3. Better indexes

Note: Changing the isolation level should only be done after careful analysis. The default REPEATABLE READ provides good consistency guarantees. Deadlocks are usually better solved through application design than isolation level changes.
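One refinement to the retry logic above: adding random jitter to the exponential backoff prevents two transactions that were rolled back together from retrying in lockstep and colliding again. A small sketch (`backoff_delay` is a hypothetical helper, not a library function):

```python
import random

# Hypothetical backoff schedule: exponential base delay plus random jitter,
# so concurrently rolled-back transactions don't retry at the same instant.
def backoff_delay(attempt, base=0.1, jitter=0.05, rng=random.random):
    """Delay in seconds before retry number `attempt` (1-based)."""
    return base * (2 ** (attempt - 1)) + jitter * rng()

# With jitter disabled, the schedule matches the examples above: 0.1s, 0.2s, 0.4s
delays = [backoff_delay(a, rng=lambda: 0.0) for a in (1, 2, 3)]
print(delays)  # [0.1, 0.2, 0.4]
```

Drop this into any of the retry loops shown earlier in place of the fixed `0.1 * (2 ** (attempt - 1))` expression.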
Advanced considerations for MySQL deadlocks:
1. Deadlock is Not an Error: In transactional databases, deadlocks are an expected outcome of concurrent access. They signal lock contention that must be handled by the application. Completely eliminating deadlocks is impossible in high-concurrency systems.
2. innodb_deadlock_detect Parameter: On very high-concurrency systems (hundreds of concurrent transactions), the deadlock detection algorithm itself can become a bottleneck. In that case, consider setting innodb_deadlock_detect = OFF and relying on innodb_lock_wait_timeout to roll back stuck transactions. Only recommended for extreme load scenarios.
3. Lock Wait Timeout: When innodb_deadlock_detect = OFF, transactions waiting for locks will eventually timeout based on innodb_lock_wait_timeout (default 50 seconds). Combine with application retry logic.
4. Foreign Key Cascades: Foreign key constraints can cause unexpected implicit locks. Review FK relationships and consider ON DELETE SET NULL vs ON DELETE CASCADE carefully.
5. Gap Locks and Next-Key Locks: REPEATABLE READ isolation uses gap locks to prevent phantom reads. This can increase deadlock probability. Understanding lock granularity helps optimize queries:
- Primary key locks: specific rows
- Unique index locks: specific rows
- Non-unique index locks: may include gap locks
6. SELECT...FOR UPDATE Behavior: Using FOR UPDATE or FOR SHARE on SELECT statements acquires locks immediately, even if no UPDATE follows. Use sparingly and only when necessary.
7. Monitoring with Performance Schema: Enable performance_schema for detailed locking information:
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;

8. Batch Operations and Lock Granularity: Breaking large updates into smaller batches can reduce deadlock risk by reducing the number of rows locked simultaneously. The trade-off is that the work is no longer atomic: each batch commits separately, so a failure can leave the set partially processed.
9. Connection Pooling Implications: Connection pools may not properly handle deadlock retries across pool boundaries. Implement retry logic at the application layer, not the pool layer.
10. Deterministic Testing: Deadlocks are inherently non-deterministic and hard to reproduce. Use load testing tools to simulate concurrent transactions and verify retry logic.
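Point 8 above can be sketched as a simple chunking helper: split the target IDs into small batches and run each batch in its own short transaction (the statements here are built for illustration only; execution and parameter binding are left to your driver):

```python
# Hypothetical sketch of batched updates: each chunk of IDs is meant to be
# updated in its own short transaction, locking only len(chunk) rows at a time.
def chunked(ids, batch_size):
    """Split ids into consecutive batches of at most batch_size."""
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

def batch_statements(ids, batch_size=1000):
    return [
        "UPDATE transactions SET status = 'processed' "
        f"WHERE id IN ({', '.join(map(str, chunk))})"
        for chunk in chunked(ids, batch_size)
    ]

stmts = batch_statements(list(range(1, 11)), batch_size=4)
print(len(stmts))  # 3 batches: 4 + 4 + 2 rows
```

Keeping the IDs sorted before chunking also preserves the consistent lock ordering discussed earlier, so concurrent batch jobs are less likely to deadlock with each other.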