MySQL error 20 (EE_DISK_FULL) occurs when the server runs out of disk space while attempting to write data. This error blocks write operations while read-only queries continue, and can occur during INSERT, ALTER TABLE, or binary log operations. Recovery requires immediately freeing disk space.
MySQL Error 20 (EE_DISK_FULL) indicates that the MySQL server encountered a "no space left on device" condition while writing to disk. This is a critical server-level error that occurs when the underlying storage partition becomes completely full. When this happens, MySQL cannot:
- Write new data to the InnoDB tablespace
- Flush InnoDB pages to disk
- Write redo logs
- Create temporary tables for complex queries
- Write to binary logs (critical for replication)
MySQL's response is to check the disk every minute and retry automatically. However, any transaction requiring disk space will block indefinitely until space becomes available. Read-only SELECT queries that don't require temporary files can continue, but all writes are frozen.
This error became more prominent and standardized in MySQL 8.0.13+ with the EE_DISK_FULL_WITH_RETRY_MSG variant, which includes more detailed retry information in the error message.
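Before anything else, it helps to know exactly how close to full the data partition is. A minimal sketch; the /var/lib/mysql path is the Linux default and an assumption here (confirm with `SELECT @@datadir;`):

```shell
# Report free space on the partition holding the MySQL data directory.
# /var/lib/mysql is the Linux default datadir (an assumption); confirm
# with `SELECT @@datadir;` on your server.
DATADIR=/var/lib/mysql
# Fall back to /tmp so the sketch also runs on hosts without MySQL
[ -d "$DATADIR" ] || DATADIR=/tmp
# df -P guarantees POSIX one-line-per-filesystem output; field 4 is free KB
FREE_KB=$(df -P "$DATADIR" | tail -1 | awk '{print $4}')
echo "Free space on partition holding $DATADIR: ${FREE_KB} KB"
```

If this reports close to zero, every step below is urgent; if the partition is merely near-full, you have time to purge logs cleanly.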
Close any application connections attempting to write:
-- From another terminal/connection, view active processes
SHOW PROCESSLIST;
-- Kill connections stuck on writes (replace process_id)
KILL QUERY process_id; -- Terminates only the query
KILL CONNECTION process_id; -- Terminates the whole connection
This prevents new transactions from queuing and getting stuck. You can continue reading from the database.
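When many connections are stuck, the processlist can be filtered mechanically instead of killing sessions one by one. A sketch that assumes tab-separated output from `mysql -N -B -e "SHOW PROCESSLIST"`; always review the generated statements before executing them:

```shell
# Read tab-separated SHOW PROCESSLIST output on stdin and print a
# KILL QUERY statement for every write statement found. Columns are
# Id, User, Host, db, Command, Time, State, Info; Info is field 8.
kill_stuck_writes() {
  awk -F'\t' 'toupper($8) ~ /^(INSERT|UPDATE|DELETE|ALTER)/ {print "KILL QUERY " $1 ";"}'
}
# Typical use (credentials assumed; review the output first!):
#   mysql -N -B -e "SHOW PROCESSLIST" | kill_stuck_writes
```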
Find which partition and directory is full:
# Check partition usage
df -h
# If MySQL data is on separate partition, check it specifically
# Linux/Unix
du -sh /var/lib/mysql/*
# Check MySQL data directory path
mysql -u root -p -e "SELECT @@datadir;"
# Find large files in data directory
find /var/lib/mysql -type f -size +1G -exec ls -lh {} \;
Look for:
- Bloated ibdata1 file (InnoDB shared tablespace)
- Large .ib_logfile0/.ib_logfile1 files (redo logs)
- Accumulating binary log files (mysql-bin.000001, etc.)
- Large temporary tables in #sql-* directories
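A small helper can quantify the items above in one pass. The directory and the mysql-bin basename are assumptions; check `@@datadir` and `@@log_bin_basename` on your server:

```shell
# Summarize the biggest space consumers in a MySQL data directory:
# total binary log size, then the 10 largest files.
space_report() {
  dir="${1:-/var/lib/mysql}"
  echo "== Total binary log size =="
  # du -c appends a cumulative "total" line; keep only that line
  du -ch "$dir"/mysql-bin.[0-9]* 2>/dev/null | tail -1
  echo "== 10 largest files =="
  # -printf is a GNU find extension: size in bytes, then path
  find "$dir" -type f -printf '%s\t%p\n' 2>/dev/null | sort -rn | head -10
}
```

Running `space_report` before and after each cleanup step shows how much each action actually reclaimed.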
Binary logs are often the quickest space to reclaim:
-- Check binary logs
SHOW BINARY LOGS;
-- On replication master: find logs already applied by all replicas
SHOW SLAVE HOSTS; -- View connected replicas (SHOW REPLICAS in MySQL 8.0.22+)
-- Purge logs safely (keep the one currently in use)
-- Example: keep logs from mysql-bin.000010 onwards
PURGE BINARY LOGS TO 'mysql-bin.000010';
-- Or purge by date (delete logs older than specific date)
PURGE BINARY LOGS BEFORE '2025-01-01';
Important: On a replication master, verify all replicas have applied the logs before purging:
-- On replica, check which master log is applied
SHOW SLAVE STATUS \G
-- Look for Master_Log_File and Read_Master_Log_Pos
Note: PURGE BINARY LOGS deletes the purged log files from disk, so that space is returned to the OS immediately; no restart is needed for this. Restart MySQL only if the server itself is wedged and brief downtime is acceptable:
sudo systemctl restart mysql
Clean up temp files MySQL created during the disk-full condition:
# Check temp directory
mysql -u root -p -e "SELECT @@tmpdir;"
# Remove MySQL temp files (be careful not to delete OS temp files)
cd /tmp # or wherever tmpdir points
ls -lah | grep -E '#sql|tmp|temp'
# Safe removal of crashed/abandoned temp tables
rm -f /tmp/#sql*
# Check InnoDB temp tablespace
# MySQL 8.0+ uses ibtmp1 file - can be large if crashed
ls -lh /var/lib/mysql/ibtmp1
# If ibtmp1 is very large, restart MySQL: the file is removed and
# recreated at its initial size on startup
Also check for other space-consuming files:
# Find all large files in data directory
find /var/lib/mysql -type f -size +500M
# Archive or delete old backups
ls -lah /backups/
For a permanent solution, increase storage capacity:
Option 1: Expand existing partition (if possible)
# LVM expansion (if using LVM)
# First, extend the volume
sudo lvextend -L +100G /dev/mapper/vg0-lv_data
# Resize the filesystem
sudo resize2fs /dev/mapper/vg0-lv_data
# Verify
df -h
Option 2: Move data directory to larger mount
# Stop MySQL
sudo systemctl stop mysql
# Create new directory on larger partition
sudo mkdir -p /large_disk/mysql_data
sudo chown mysql:mysql /large_disk/mysql_data
sudo chmod 700 /large_disk/mysql_data
# Copy data
sudo cp -av /var/lib/mysql/* /large_disk/mysql_data/
# Update MySQL config
sudo nano /etc/mysql/mysql.cnf
# Change: datadir = /large_disk/mysql_data
# (On Ubuntu/Debian, also update the AppArmor profile for mysqld to allow the new path)
# Fix permissions
sudo chown -R mysql:mysql /large_disk/mysql_data
# Start MySQL
sudo systemctl start mysql
Option 3: Cloud database auto-scaling
If using AWS RDS, Google Cloud SQL, or Azure Database for MySQL, enable automatic storage scaling:
- AWS RDS: Enable "Storage autoscaling" (auto-extends when reaching threshold)
- Google Cloud SQL: Enable "Automatic storage increase"
- Azure: Monitor storage and scale manually or via automation
Set up automatic binary log purging to prevent recurrence:
-- Check current setting
SHOW VARIABLES LIKE 'expire_logs_days';
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
-- Set expiration (MySQL 8.0.1+, more precise)
SET GLOBAL binlog_expire_logs_seconds = 604800; -- 7 days
-- For older MySQL versions use:
SET GLOBAL expire_logs_days = 7;
-- Make it permanent in /etc/mysql/mysql.cnf
[mysqld]
binlog_expire_logs_seconds = 604800 # 7 days
# OR (older versions)
expire_logs_days = 7
After a restart, binary logs older than 7 days are purged automatically:
sudo systemctl restart mysql
Once sufficient space is available, monitor recovery:
-- Check if disk-full message stops appearing
-- Watch error log in real-time (Linux)
tail -f /var/log/mysql/error.log
-- Verify stuck transactions resume
SHOW PROCESSLIST; -- Blocked queries should complete
-- Check InnoDB status
SHOW ENGINE INNODB STATUS \G
-- Should show normal activity, not disk-full messages
-- Once recovered, test a simple write
INSERT INTO test_table VALUES (1);
MySQL checks disk availability every minute, so transactions should resume automatically without restart once space is available.
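Applications can mirror this retry-with-delay behavior instead of failing hard during the outage. A generic sketch; the interval and the example mysql command below are placeholders, not MySQL-specific machinery:

```shell
# Retry a command a fixed number of times with a delay between attempts,
# mirroring MySQL's own once-a-minute disk check.
# Usage: retry <attempts> <delay_seconds> <command...>
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1   # give up after N attempts
    i=$((i + 1))
    sleep "$delay"
  done
}
# Example: probe once a minute until a test write succeeds (assumed table):
#   retry 60 60 mysql -e "INSERT INTO test_table VALUES (1);"
```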
The Dummy File Trick:
Advanced DBAs use a clever recovery technique: create a large placeholder file (1-2GB) in the data directory beforehand. When disk becomes full, delete the dummy file to instantly free space, giving you time to:
- Kill blocking transactions
- Purge logs
- Migrate data
# Create 1GB dummy file in MySQL data directory
dd if=/dev/zero of=/var/lib/mysql/placeholder.tmp bs=1M count=1024
# When disk is full, delete it to free 1GB instantly
rm /var/lib/mysql/placeholder.tmp
Handling REPAIR TABLE and OPTIMIZE TABLE:
These DDL operations create large temporary files and may behave differently during disk-full conditions:
- REPAIR TABLE and OPTIMIZE TABLE may remove their temp files and mark the table as crashed if disk fills
- Use pt-online-schema-change instead for safer alterations
- Always test DDL operations on replicas first
Replication Impact:
If binary logs are purged before replicas apply them:
- The replica's I/O thread stops with "Could not find first log file name in binary log index file"
- The replica must then be rebuilt from a fresh backup of the source
- Always verify every replica's position before purging on the master
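The replica positions can be checked mechanically before purging. A parsing sketch over `SHOW SLAVE STATUS\G` output collected from each replica; it deliberately matches both Master_Log_File and Relay_Master_Log_File so the answer is conservative:

```shell
# Read `SHOW SLAVE STATUS\G` output (from one or more replicas) on stdin
# and print the oldest binary log any of them still needs. Anything older
# than this file name is safe to purge on the master.
oldest_needed_binlog() {
  # Matches "Master_Log_File:" and "Relay_Master_Log_File:" lines;
  # binlog names sort lexically, so the first is the oldest
  awk -F': ' '/Master_Log_File:/ {print $2}' | sort | head -1
}
# Typical use (assumed credentials, per replica host):
#   mysql -h replica1 -e 'SHOW SLAVE STATUS\G' | oldest_needed_binlog
```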
InnoDB Temporary Files (ibtmp1):
In MySQL 8.0+, temporary tables use a dedicated ibtmp1 file:
- Can grow to gigabytes if complex queries create large temp tables
- Removed and recreated at its initial size whenever MySQL restarts
- Consider increasing innodb_temp_data_file_path to separate partition
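One mitigation (a config sketch, not the only option) is to cap how large ibtmp1 may grow, so runaway temporary tables cannot fill the data partition; the 5G ceiling below is an example value:

```ini
[mysqld]
# Let ibtmp1 autoextend from its 12M initial size, but never past 5G
# (example cap; size it for your workload)
innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:5G
```

With a cap in place, an oversized temporary table fails with a table-full error instead of exhausting the disk.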
Monitoring and Alerts:
Implement proactive monitoring to prevent disk-full:
# Monitor free space with cron job
0 */6 * * * bash -c 'echo "Free space:" && df -h /var/lib/mysql'
# Alert if below threshold (e.g., 10% free)
THRESHOLD=10
USED_PCT=$(df /var/lib/mysql | tail -1 | awk '{print $5}' | cut -d'%' -f1)
if [ "$USED_PCT" -gt $((100 - THRESHOLD)) ]; then
  send_alert "MySQL disk usage critical: ${USED_PCT}%"
fi
Cloud Provider Considerations:
- AWS RDS: Use CloudWatch alarms for Free Storage Space metric
- Google Cloud SQL: Enable automatic storage increase
- Azure: Set up metric alerts for storage consumption
- Always test backup/restore procedure to ensure new storage is sufficient