MySQL ERROR 1183 occurs when InnoDB fails during a checkpoint operation, which flushes modified data from memory to disk. This critical error usually indicates disk I/O problems, insufficient disk space, file system issues, or hardware failures. Resolve it by checking disk health, freeing space, restarting MySQL, and investigating the error log for the underlying cause.
MySQL ERROR 1183 (ER_ERROR_DURING_CHECKPOINT) is triggered when the InnoDB storage engine encounters a failure during a checkpoint operation. A checkpoint is an internal process in which InnoDB flushes modified database pages from the buffer pool to disk, ensuring data durability and consistency. It is a critical operation that runs periodically during normal database operation.

When a checkpoint fails, it usually indicates one of these issues:

- The disk is full or nearly full
- The file system has errors or became read-only
- The disk is experiencing I/O errors due to hardware failure
- File permission problems are preventing writes

The error message includes an additional error code (shown as "%d" in the generic message) that identifies the specific operating system error that caused the checkpoint to fail. Unlike some errors that recover automatically, checkpoint failures are serious because they prevent InnoDB from ensuring that recent changes are safely written to disk. If the server crashes after a failed checkpoint without successful recovery, data loss or corruption could occur.
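If the server is still accepting connections, one way to confirm that checkpoints have stalled is to compare InnoDB's log sequence number with its last checkpoint position; a minimal sketch, assuming a client account with the PROCESS privilege:

# Compare how far the redo log has advanced vs. the last completed checkpoint
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" \
  | grep -E "Log sequence number|Log flushed up to|Last checkpoint at"
# If "Last checkpoint at" stops advancing while "Log sequence number" keeps
# growing, checkpoints are not completing.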
The most common cause of checkpoint failures is insufficient disk space. Check all partitions, especially those containing MySQL data, logs, and temporary files.
df -h

Look for partitions showing high usage (>90%) or full (100%). For example:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 100G 95G 5.0G 95% /
/dev/sda2 50G 48G 2.0G 96% /var/lib/mysql
tmpfs 16G 15G 1.0G 94% /tmp

Also check the MySQL data directory specifically:
du -sh /var/lib/mysql
du -sh /var/log/mysql
du -sh /tmp

If any partition is >95% full, you must free space before MySQL can recover.
If disk is the issue, immediately free space by removing old binary logs, temporary files, and unnecessary data.
Stop MySQL first to prevent further operations:
sudo systemctl stop mysql

Remove old binary logs (these are safe to delete only if they are backed up and no longer needed by any replica):
# Find and remove old binary logs (older than 30 days)
sudo find /var/log/mysql -name 'mysql-bin.*' -mtime +30 -delete
# Remove old error logs
sudo find /var/log -name 'error.log*' -mtime +30 -delete
# Clear /tmp directory
sudo rm -rf /tmp/*

Check disk space again:
df -h

You should have at least 10% free space (ideally 20%) for MySQL to operate safely.
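Deleting binary log files with find is acceptable in an emergency, but it can leave stale entries in the binary log index file. Once the server is running again (next step), a cleaner way to keep binary logs under control is to have MySQL purge them itself; a sketch, assuming MySQL 8.0 (older versions use expire_logs_days instead of binlog_expire_logs_seconds):

# Purge binary logs older than 30 days through the server itself
mysql -u root -p -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 30 DAY;"
# Keep them pruned automatically from now on (2592000 seconds = 30 days)
mysql -u root -p -e "SET PERSIST binlog_expire_logs_seconds = 2592000;"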
Once you have freed disk space, restart MySQL and watch the error log for signs of the checkpoint failure repeating.
sudo systemctl start mysql

Monitor the error log in real time:
sudo tail -f /var/log/mysql/error.log

Look for lines like:
[ERROR] [MY-012651] [InnoDB] Got error -1 during CHECKPOINT
[ERROR] [MY-012651] [InnoDB] Got error 28 during CHECKPOINT

The error code after "error" provides clues:
- Error 28 (ENOSPC): No space left on device
- Error 13 (EACCES): Permission denied
- Error 5 (EIO): Input/output error (hardware problem)
- Error 24 (EMFILE): Too many open files
If the error persists after freeing space, investigate the specific error code.
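The perror utility that ships with MySQL translates these operating system error numbers into readable messages, which helps confirm which cause you are dealing with:

# Decode the OS error number shown after "Got error ... during CHECKPOINT"
perror 28        # OS error code  28:  No space left on device
perror 13 5 24   # decode several codes at once
# For error 24 (too many open files), also check the limit systemd grants the
# service (the unit may be named mysqld on RHEL-family systems)
systemctl show mysql -p LimitNOFILE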
If disk space is available but the error continues, check the file system for corruption and verify MySQL permissions.
Check file system for errors:
# Check the file system (a read-only check can run while mounted; repairs require unmounting or running at boot)
sudo fsck -n /dev/sda2 # -n flag runs read-only check without repairs
# Or use e2fsck for ext4
sudo e2fsck -n /dev/sda2

If errors are found and you need to repair them, you must stop MySQL and unmount the file system first (this requires downtime).
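Also confirm that the kernel has not already remounted the data partition read-only in response to earlier file system errors; a quick check:

# Show the mount options currently in effect for the MySQL data directory
findmnt -T /var/lib/mysql -o TARGET,SOURCE,FSTYPE,OPTIONS
# "ro" in OPTIONS means the file system was (re)mounted read-only; dmesg usually
# records why (e.g. "Remounting filesystem read-only")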
Check MySQL data directory permissions:
ls -ld /var/lib/mysql
ls -l /var/lib/mysql/ib_logfile*
ls -l /var/lib/mysql/ibdata*

All files should be owned by the mysql user and group (on MySQL 8.0.30 and later, the redo logs live in the #innodb_redo subdirectory rather than in ib_logfile* files). Fix ownership and permissions if needed:
sudo chown -R mysql:mysql /var/lib/mysql
sudo chmod 750 /var/lib/mysql
sudo chmod 660 /var/lib/mysql/ibdata*
sudo chmod 660 /var/lib/mysql/ib_logfile*

Restart MySQL:
sudo systemctl restart mysql

If the error persists after freeing space and fixing permissions, investigate hardware-level disk failures.
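Before concluding it is hardware, one more software-level check is worthwhile: on systems running SELinux or AppArmor, mysqld can be denied writes (OS error 13) even when ownership and file modes are correct, for example after the data directory has been moved or restored. A quick check, assuming the standard tools are installed:

# SELinux (RHEL/CentOS/Fedora): check the mode and restore default file contexts
getenforce
sudo restorecon -Rv /var/lib/mysql
# AppArmor (Ubuntu/Debian): confirm whether a mysqld profile is being enforced
sudo aa-status | grep -i mysql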
Check the system log for I/O errors:
sudo dmesg | grep -i error | tail -20
sudo journalctl -p err -n 50

Look for messages like:
[Hardware Error]: Machine check from unknown source
I/O error dev sda, sector 12345
Buffer I/O error on device sda1

Run disk diagnostics:
# Check disk SMART status (if supported)
sudo smartctl -a /dev/sda
# Read-only surface scan; the non-destructive read-write test (-n) and the
# destructive write test (-w) require the file system to be unmounted
sudo badblocks -v /dev/sda1

If SMART reports a failing disk or badblocks finds read errors, the disk is likely failing and needs replacement. This is a hardware issue, not a MySQL configuration problem.
For cloud environments, check the hypervisor logs and request disk replacement from your provider.
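For example, on AWS the EBS volume status checks can be queried directly; the volume ID below is a placeholder:

# "impaired" or "insufficient-data" here points at a storage-side problem
aws ec2 describe-volume-status --volume-ids vol-0123456789abcdef0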
If your database is legitimately large and growing, configure MySQL to use larger log files and expand storage capacity.
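Note that on MySQL 8.0.30 and later, innodb_log_file_size is deprecated in favor of innodb_redo_log_capacity, which can be raised dynamically without a restart; a sketch:

# MySQL 8.0.30+: grow redo log capacity to 1 GiB online (value is in bytes)
mysql -u root -p -e "SET PERSIST innodb_redo_log_capacity = 1073741824;"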
Increase the InnoDB log file size in /etc/mysql/my.cnf (applies to versions that still use innodb_log_file_size; see the note above for MySQL 8.0.30+):
[mysqld]
# Increase log file size (default is usually 48MB)
# Value is per log file, and there are typically 2 log files
innodb_log_file_size = 1G
# Increase buffer pool if server has sufficient RAM
innodb_buffer_pool_size = 8G

Before changing the log file size, stop MySQL and rename or remove the old log files (on MySQL 5.6.8 and later this step is optional, because InnoDB resizes the files automatically after a clean shutdown):
sudo systemctl stop mysql
cd /var/lib/mysql
sudo mv ib_logfile0 ib_logfile0.bak
sudo mv ib_logfile1 ib_logfile1.bak
sudo systemctl start mysql

For storage expansion in cloud environments:
# AWS EBS example (after enlarging the volume; if the disk is partitioned,
# grow the partition first, e.g. sudo growpart /dev/xvda 1)
sudo resize2fs /dev/xvda1
# Azure managed disk example
sudo parted /dev/sda
# Type: resizepart
# Number: 1
# End: 100%
sudo resize2fs /dev/sda1

For physical servers, add additional storage or migrate to a larger volume:
sudo systemctl stop mysql
sudo mv /var/lib/mysql /mnt/large_disk/mysql
# Update /etc/mysql/my.cnf: datadir = /mnt/large_disk/mysql
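# If AppArmor is enabled (assumption: Ubuntu/Debian defaults), also allow mysqld
# to use the new path, e.g. by adding this line to /etc/apparmor.d/tunables/alias:
#   alias /var/lib/mysql/ -> /mnt/large_disk/mysql/,
# then reload AppArmor before starting MySQL:
sudo systemctl restart apparmor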
sudo chown -R mysql:mysql /mnt/large_disk/mysql
sudo systemctl start mysql

Implement monitoring to prevent checkpoint failures from happening again.
Set up disk space monitoring:
# Add to crontab to check every 2 hours
0 */2 * * * df -h | awk 'NR>1 {if ($5+0 > 80) print "Disk Alert: " $6 " is " $5 " full"}' | mail -s "MySQL Disk Alert" [email protected]

Monitor the MySQL error log for checkpoint errors:
# Add to crontab to check every 4 hours
0 */4 * * * grep "CHECKPOINT" /var/log/mysql/error.log | tail -100 | mail -s "MySQL Checkpoint Report" [email protected]

Use cloud-native monitoring:
# AWS CloudWatch, Azure Monitor, or similar services
# Set alerts for:
# - Disk usage > 80%
# - InnoDB pages not flushed
# - MySQL errors per minute > threshold

Keep at least 20% disk space free at all times. Configure log rotation and purging so log files cannot consume all available space:
# Configure logrotate for MySQL error logs
# Edit /etc/logrotate.d/mysql-server
/var/log/mysql/error.log {
    daily
    rotate 10
    compress
    delaycompress
    notifempty
    create 0660 mysql mysql
    sharedscripts
    postrotate
        # Ask MySQL to reopen its log files after rotation (needs client
        # credentials, e.g. root via auth_socket on Debian/Ubuntu)
        mysqladmin flush-logs > /dev/null 2>&1 || true
    endscript
}

InnoDB Crash Recovery: If MySQL crashed immediately after error 1183, InnoDB will run crash recovery on startup to verify data consistency. This can take several minutes or longer depending on the database size. Monitor the error log during this process:
grep "InnoDB: Recovering" /var/log/mysql/error.logFuzzy Checkpointing: InnoDB uses "fuzzy checkpointing" where it flushes modified pages in small batches continuously, rather than in one massive operation. This means checkpoint failures may not immediately prevent all queries, but they do prevent data durability guarantees. A persistent checkpoint failure indicates the underlying I/O problem is worsening.
innodb_force_recovery: If MySQL won't start because recovery keeps failing after the checkpoint error, you can set innodb_force_recovery to a value from 1 to 6 to skip parts of the recovery process:
[mysqld]
innodb_force_recovery = 3

However, values above 3 risk data loss. Use this only as a temporary measure to dump data, then restore to a clean instance.
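A minimal sketch of that dump-and-rebuild path (paths and options are illustrative):

# With innodb_force_recovery set, dump everything while the server is up
mysqldump -u root -p --all-databases --routines --events > /backup/all_databases.sql
# Then remove innodb_force_recovery, rebuild a clean data directory, and reload:
#   mysql -u root -p < /backup/all_databases.sql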
Replication and Error 1183: On a replication replica, error 1183 stops the replication applier (SQL) thread until the underlying checkpoint problem is fixed. Resolve the disk or checkpoint error on the replica first, then restart replication (use STOP REPLICA and START REPLICA on MySQL 8.0.22 and later):
STOP SLAVE;
START SLAVE;

Network Storage (NAS) Issues: If MySQL data lives on network-attached storage, error 1183 often indicates NFS/SMB timeouts or a dropped connection to the NAS. Verify NAS connectivity and increase the NFS/SMB timeout settings. Consider moving to local storage if the NAS is unreliable.
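If moving off network storage is not practical, stricter NFS mount options reduce the chance that a transient network stall surfaces as an I/O error; an illustrative /etc/fstab entry (server name, export path, and timeout values are placeholders, and applying it requires stopping MySQL and remounting):

nas01:/export/mysql  /var/lib/mysql  nfs  hard,timeo=600,retrans=3,noatime  0  0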
Temperature and Hardware Aging: Sustained heavy write load can push disks and controllers into thermal throttling, and aging drives may start retrying or timing out under that load, which surfaces as I/O errors during checkpoints. Monitor temperatures and overall system load while the checkpoint failures are occurring:
sensors # Show temperature sensors
top # Show CPU/memory usage during checkpoint
How to fix "EE_WRITE (3): Error writing file" in MySQL
CR_PARAMS_NOT_BOUND (2031): No data supplied for parameters
How to fix "CR_PARAMS_NOT_BOUND (2031): No data supplied for parameters" in MySQL
CR_DNS_SRV_LOOKUP_FAILED (2070): DNS SRV lookup failed
How to fix "CR_DNS_SRV_LOOKUP_FAILED (2070): DNS SRV lookup failed" in MySQL
ERROR 1146: Table 'database.table' doesn't exist
How to fix "ERROR 1146: Table doesn't exist" in MySQL
ERROR 1040: Too many connections
How to fix "ERROR 1040: Too many connections" in MySQL