MySQL ERROR 1593 indicates replication failure caused by a corrupted relay log file. The relay log stores replication data from the source server, and corruption typically results from hardware failures or unexpected server shutdowns. Fix by resetting the replica and resyncing from the source.
ERROR 1593 "Error reading relay log" occurs when MySQL replica (slave) cannot read its relay log file during replication. The relay log is a temporary binary log on the replica that stores changes received from the source (master) server before they're applied to the replica's database. When the relay log becomes corrupted—typically due to disk I/O errors, hardware failures, or abrupt server shutdowns—the SQL thread cannot continue processing replication events. This is a replication-critical error that stops the replica from applying changes, causing data divergence from the source. The error occurs specifically when the SQL thread tries to read and parse a relay log event to apply it to the replica's database. Unlike the I/O thread (which reads from the source), the SQL thread operates locally and will fail immediately if the relay log is unreadable.
Run mysqlbinlog on the source server to ensure its binary log is healthy. Log in to the source server and check the binary log file:
# On the source server
mysqlbinlog /path/to/binary/log/mysql-bin.XXXXXX | head -50

If mysqlbinlog fails with errors, the source's binary log is corrupted and must be recovered before attempting to fix the replica.
If mysqlbinlog succeeds, proceed to the next step.
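If you are not sure which binary log file to inspect, connect to the source with the mysql client and list the available binary logs first (the file names and sizes returned depend entirely on your setup):

SHOW BINARY LOGS;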
Before resetting the replica, capture the current replication position. Connect to the replica and run:
SHOW REPLICA STATUS\G

(Or SHOW SLAVE STATUS\G on older MySQL versions)
Note these values—they may be useful for troubleshooting:
- Relay_Source_Log_File (or Relay_Master_Log_File): The binary log on the source
- Exec_Source_Log_Pos (or Exec_Master_Log_Pos): The last position executed
These will help you determine exactly where replication was interrupted.
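If you prefer to capture just these two fields non-interactively, one way (assuming the mysql client on the replica with credentials in your option file, and MySQL 8.0.22+ field names) is to filter the status output from the shell:

# On the replica
mysql -e "SHOW REPLICA STATUS\G" | grep -E 'Relay_Source_Log_File|Exec_Source_Log_Pos'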
Stop both the IO and SQL threads to prevent further errors:
STOP REPLICA;

(Or STOP SLAVE; on older MySQL versions)
Verify both threads are stopped:
SHOW REPLICA STATUS\G

Confirm that both Replica_IO_Running and Replica_SQL_Running (or Slave_IO_Running and Slave_SQL_Running on older versions) show No.
Reset the replica to clear all replication metadata and remove corrupted relay log files:
RESET REPLICA;

(Or RESET SLAVE; on older MySQL versions)
This command:
- Clears the master.info and relay-log.info metadata repositories
- Deletes ALL relay log files
- Creates a fresh relay log for new replication
This is safe because the relay log is temporary storage; all replication data can be re-fetched from the source.
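Note the difference between the two reset forms; for this procedure the plain form is usually what you want, since the next step re-specifies the connection anyway:

RESET REPLICA;      -- deletes relay logs and replication metadata, keeps the connection settings
RESET REPLICA ALL;  -- additionally clears SOURCE_HOST, SOURCE_USER, and other connection parameters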
Reconnect the replica to the source using the binary log position you noted earlier:
CHANGE REPLICATION SOURCE TO
SOURCE_HOST='source-server-hostname',
SOURCE_USER='replication-user',
SOURCE_PASSWORD='replication-password',
SOURCE_LOG_FILE='mysql-bin.XXXXXX',
SOURCE_LOG_POS=XXXXXXXXX;

Replace:
- source-server-hostname: Hostname/IP of the source
- mysql-bin.XXXXXX: The Relay_Source_Log_File value you noted earlier
- XXXXXXXXX: The Exec_Source_Log_Pos value you noted earlier
- replication-user and password with your replication credentials
For older MySQL versions, use CHANGE MASTER TO instead.
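For reference, on versions that only support the older syntax the equivalent statement looks like this (same placeholders as above):

CHANGE MASTER TO
MASTER_HOST='source-server-hostname',
MASTER_USER='replication-user',
MASTER_PASSWORD='replication-password',
MASTER_LOG_FILE='mysql-bin.XXXXXX',
MASTER_LOG_POS=XXXXXXXXX;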
If you're using GTIDs, use:
CHANGE REPLICATION SOURCE TO
SOURCE_HOST='source-server-hostname',
SOURCE_USER='replication-user',
SOURCE_PASSWORD='replication-password',
SOURCE_AUTO_POSITION=1;
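Auto-positioning only works when GTIDs are enabled on both servers; if in doubt, check before relying on it (run on both the source and the replica):

SHOW VARIABLES LIKE 'gtid_mode';

The value must be ON for SOURCE_AUTO_POSITION=1 to work.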
Restart the replication threads:

START REPLICA;

(Or START SLAVE; on older MySQL versions)
Check the replication status:
SHOW REPLICA STATUS\G

Monitor these key fields:
- Replica_IO_Running (or Slave_IO_Running): Should be Yes (the replica I/O thread is reading from the source)
- Replica_SQL_Running (or Slave_SQL_Running): Should be Yes (the replica SQL thread is applying changes)
- Seconds_Behind_Source (or Seconds_Behind_Master): Should decrease toward 0 as replication catches up
- Last_Error: Should be empty if replication is healthy
Watch the error log for any new errors:
# Check MySQL error log
tail -f /var/log/mysql/error.log | grep -i error

Replication will resync from the source and rebuild the relay log. The process time depends on how far behind the replica was.
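The error log path above is a common default for Debian/Ubuntu packages; if your distribution or configuration differs, you can ask the server where it writes its error log:

SHOW VARIABLES LIKE 'log_error';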
ARM Platform Considerations: MySQL 8.0 on ARM architecture sometimes encounters transient relay log read errors due to ARM's weak memory consistency. These errors may resolve on their own after restarting the SQL thread. If the issue persists, follow the full reset procedure above.
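A minimal SQL-thread-only restart, as suggested above, looks like this:

STOP REPLICA SQL_THREAD;
START REPLICA SQL_THREAD;

(Use STOP SLAVE SQL_THREAD; and START SLAVE SQL_THREAD; on older MySQL versions.)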
Relay Log Recovery: MySQL 5.7.3+ supports automatic relay log recovery with the relay_log_recovery system variable. Setting relay_log_recovery=1 in my.cnf enables automatic recovery from corrupted relay logs on startup. However, manual reset is more reliable for severe corruption.
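A minimal my.cnf sketch for enabling this (the option name is real; the [mysqld] placement is assumed to match your existing configuration layout, and the setting takes effect at server startup):

[mysqld]
relay_log_recovery = 1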
GTID vs Binlog Position: If using GTIDs (Global Transaction Identifiers), the reset process is simpler—you can use SOURCE_AUTO_POSITION=1 to avoid calculating exact binlog positions. This is recommended for modern MySQL setups.
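To sanity-check a GTID-based setup, you can compare the executed GTID sets on the source and the replica; once the replica has caught up, its set should contain everything in the source's set:

SELECT @@GLOBAL.gtid_executed;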
Prevention: Configure relay_log and relay_log_index system variables explicitly in my.cnf to use fixed file names independent of hostname changes. This prevents replication confusion if the replica hostname changes in DHCP environments.
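A minimal example of fixing the relay log names in my.cnf (the paths below are illustrative; use your actual data directory):

[mysqld]
relay_log       = /var/lib/mysql/relay-bin
relay_log_index = /var/lib/mysql/relay-bin.index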
Full Resync Alternative: If you suspect other data inconsistencies beyond relay log corruption, perform a full backup of the source and restore it on the replica, then restart replication from the current position. This is more time-consuming but ensures complete data integrity.
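A rough outline of that approach, assuming mysqldump and an InnoDB dataset (the option is --source-data on MySQL 8.0.26+ and --master-data on older versions; either records the source's binary log coordinates in the dump):

# On the source: consistent dump that embeds the binlog coordinates
mysqldump --single-transaction --source-data=2 --all-databases > full_dump.sql

# On the replica: load the dump, then point replication at the coordinates
# recorded near the top of full_dump.sql
mysql < full_dump.sql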