The MySQL client runs out of memory while processing a query result set or allocating memory for an operation. This error occurs when the result set is too large for the available client memory, when the client application is 32-bit, or when client-side memory limits are misconfigured.
ERROR 2008 indicates the client-side MySQL process has exhausted available memory. Unlike server-side memory errors, this happens on the machine running the MySQL client (such as mysql CLI tool, Workbench, ODBC connector, or application server). The error usually includes details like "needed 8136 byte (8k), memory in use: 12481367 bytes (12189k)". Common triggers include huge result sets (60,000+ rows with many columns), 32-bit application limitations, or memory limits imposed by the operating system. This is distinct from server "out of memory" errors and requires client-side solutions.
First, review whether the query is correct and whether you truly need all rows. Run:
SELECT COUNT(*) FROM your_table WHERE your_conditions;

If this returns 100,000+ rows, you likely need to add filters or pagination. Fewer rows = less memory on the client side.
The --quick (or -q) flag tells the mysql client to use streaming result retrieval instead of buffering the entire result set:
mysql -h localhost -u user -p --quick < large_query.sql > output.txt

This uses mysql_use_result(), which fetches rows on demand rather than all at once. This dramatically reduces client-side memory usage.
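The difference between buffered and streaming retrieval can be sketched in Python. Here `fetch_rows` is a stand-in for a database cursor, not a real driver call; with an actual driver, an unbuffered cursor (for example pymysql's SSCursor) plays the role of the generator:

```python
# Sketch of buffered vs. streaming retrieval. fetch_rows() simulates a
# server handing out rows; it is NOT a real MySQL API.

def fetch_rows(n):
    """Simulate a server producing n rows, one at a time."""
    for i in range(n):
        yield (i, f"row-{i}")

def buffered(n):
    # mysql_store_result() behavior: the whole result set is
    # materialized in client memory at once.
    return list(fetch_rows(n))

def streaming(n):
    # mysql_use_result() / --quick behavior: rows are processed one at
    # a time, so peak client memory stays constant regardless of size.
    count = 0
    for _row in fetch_rows(n):
        count += 1
    return count

print(streaming(100_000))  # 100000
```

With buffering, 100,000 rows would all sit in memory simultaneously; with streaming, only one row exists at a time.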
Instead of retrieving results into the client, save directly to the server file system:
SELECT * FROM large_table
INTO OUTFILE '/var/lib/mysql-files/output.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

Then retrieve the file over SSH/SCP. This bypasses client memory entirely by processing on the server. Note: the server will only write to the directory allowed by its secure_file_priv setting (often /var/lib/mysql-files/ on Linux packages).
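If secure_file_priv prevents server-side export, a similar effect is possible client-side by streaming rows straight to disk instead of accumulating them. This is a minimal sketch; `rows` is a stand-in for an unbuffered cursor, not a real driver API:

```python
import csv
import io

def rows():
    # Stand-in for an unbuffered cursor yielding rows one at a time.
    for i in range(5):
        yield (i, f"name-{i}")

def export_csv(out):
    # Write each row as it arrives; nothing is held in memory beyond
    # the current row, so the export size is unbounded by client RAM.
    writer = csv.writer(out)
    writer.writerow(["id", "name"])  # header row
    n = 0
    for row in rows():
        writer.writerow(row)
        n += 1
    return n

buf = io.StringIO()
print(export_csv(buf))  # 5
```

The same pattern works with a file opened on disk in place of the StringIO buffer.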
Break large queries into smaller chunks using keyset pagination:

SELECT * FROM large_table
WHERE id > 0
ORDER BY id
LIMIT 10000;

-- Then in the next iteration, start after the last id seen:
SELECT * FROM large_table
WHERE id > 10000
ORDER BY id
LIMIT 10000;

Or use offset-based pagination:

SELECT * FROM large_table ORDER BY id LIMIT 10000 OFFSET 0;
SELECT * FROM large_table ORDER BY id LIMIT 10000 OFFSET 10000;

Process results in batches of 10,000-50,000 rows per query. The ORDER BY is required for deterministic chunks. Keyset pagination (WHERE id > last_seen_id) stays fast at any depth, while OFFSET gets progressively slower because the server must scan and discard all skipped rows.
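The keyset loop above can be sketched in Python. `fetch_batch` here stands in for executing `SELECT * FROM t WHERE id > %s ORDER BY id LIMIT %s` against a real server; the in-memory TABLE is purely illustrative:

```python
# Keyset pagination sketch over an in-memory "table".
TABLE = [{"id": i, "val": i * 2} for i in range(1, 25_001)]

def fetch_batch(last_id, limit):
    # Stand-in for: SELECT * FROM t WHERE id > %s ORDER BY id LIMIT %s
    out = []
    for row in TABLE:  # a real server would walk the PK index instead
        if row["id"] > last_id:
            out.append(row)
            if len(out) == limit:
                break
    return out

def paginate(limit=10_000):
    last_id, total, batches = 0, 0, 0
    while True:
        batch = fetch_batch(last_id, limit)
        if not batch:          # empty batch means we are done
            break
        total += len(batch)
        batches += 1
        last_id = batch[-1]["id"]  # resume after the last key seen
    return total, batches

print(paginate())  # (25000, 3)
```

Only one batch is in memory at a time, and each new query resumes from the last id rather than re-scanning skipped rows.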
Check if your MySQL client is 32-bit:
# On Linux
file $(which mysql)
# Output: ELF 32-bit LSB shared object = 32-bit
# On Windows
# Check MySQL Workbench: Help > About

64-bit clients can address much more memory (terabytes vs 2-4 GB). Install the 64-bit version of your MySQL client, Workbench, or database driver:
# Linux
sudo apt install mysql-client # defaults to 64-bit on modern systems
# macOS
brew install mysql-client
# Windows
# Download the MySQL Community Server 64-bit installer from dev.mysql.com

If using ODBC (Excel, Power BI, certain BI tools), adjust result caching:
In ODBC Data Source Administrator (Windows) or .odbc.ini (Linux/macOS):
[MySQL ODBC Driver]
CURSOR_TYPE=UseOdbc
USE_RESULT=1
PREFETCH=1000

Or in some drivers, enable:
- "Don't cache results of forward-only cursors"
- "Force use of forward-only cursors"
These settings use streaming instead of buffering the entire result set.
Check and increase the memory limit your OS allows for the mysql process:
# Check current limits
ulimit -a
# Raise the virtual memory limit (temporary, for the current shell session)
ulimit -v unlimited
# For persistent changes, edit /etc/security/limits.conf (Linux):
* hard as unlimited
* soft as unlimited

However, this is a temporary workaround. The root issue is the query or result set being too large.
If backing up or exporting data, mysqldump with --quick streams results:
mysqldump -h localhost -u user -p --quick --single-transaction \
--routines --triggers large_database > backup.sql

The --quick flag (the default with mysqldump) uses mysql_use_result() to avoid buffering. Always include --single-transaction for InnoDB to maintain consistency without table locks.
This error is client-side, not server-side. The MySQL daemon itself has memory available, but the client process (whether the mysql CLI, Workbench, an ODBC driver, or your application) ran out. This is different from ERROR 1037 ("Out of memory; restart server"), which means the server itself exhausted memory. For large data operations (terabytes of backup/restore), consider non-traditional approaches: MySQL Shell's dump utilities with parallel threading, Percona XtraBackup (more efficient than mysqldump for large datasets), or cloud-native solutions like AWS Database Migration Service. If this error occurs in application code (Node.js, Python, PHP), check whether your database driver or connection pool is configured to stream results rather than buffer them. For example, Node.js mysql2 can stream rows via the query interface's row events instead of collecting the full result set, and Python's PyMySQL offers an unbuffered SSCursor.
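The driver-side distinction can be sketched with a minimal cursor object. `FakeCursor` is an illustration, not a real driver class, but it mirrors the iteration pattern that unbuffered cursors such as pymysql's SSCursor expose:

```python
# Illustration of the buffering vs. streaming API shape in application
# code. FakeCursor is a stand-in for a driver cursor, not a real API.

class FakeCursor:
    def __init__(self, nrows):
        self._rows = iter(range(nrows))

    def fetchall(self):
        # Buffered: every remaining row is materialized at once.
        return list(self._rows)

    def __iter__(self):
        # Streaming: rows are handed over one at a time.
        return self._rows

def total_streaming(cur):
    # Aggregate while iterating; peak memory is a single row.
    total = 0
    for row in cur:
        total += row
    return total

print(total_streaming(FakeCursor(1000)))  # 499500
```

Calling fetchall() on a huge result set is what triggers ERROR 2008 in application code; iterating the cursor keeps memory flat.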