This MySQL client error occurs when the client process runs out of available memory while fetching or processing query results. The error typically appears when executing queries that return extremely large result sets, processing large BLOB columns, or when the client application has insufficient memory allocation. Breaking queries into smaller chunks, increasing memory limits, or using unbuffered result fetching resolves this issue.
The CR_OUT_OF_MEMORY (2008) error is a MySQL client error that occurs when the client-side process cannot allocate enough memory to handle query results. Unlike server-side out-of-memory errors, this is a client-side issue. Key aspects of this error:
1. **Client Memory Limits**: The MySQL client (mysql command, mysqli, PDO, MySQL connectors) attempts to allocate memory for storing query results. When the requested size exceeds available memory, the error occurs.
2. **Result Set Fetching**: By default, MySQL clients fetch entire result sets into memory at once. A query returning 100 million rows would try to allocate memory for all rows simultaneously, triggering the error.
3. **Memory Allocation Patterns**:
- **Buffered Results**: The client fetches all rows and stores them in memory. This is the default behavior in most MySQL libraries.
- **Unbuffered Results**: The client fetches rows one at a time, so only one row is in memory at any moment.
4. **Common Triggers**:
- Large SELECT queries without LIMIT clauses
- Fetching all columns from wide tables with many rows
- Processing large BLOB/TEXT columns
- Joining large tables that produce enormous result sets
- Exporting entire tables without pagination
- Stored procedures returning huge result sets
5. **System vs Application Memory**: The error can be triggered by an actual system memory shortage or by process-level memory limits (ulimit, cgroups) that are lower than the memory requirement.
6. **Difference from "Out of Memory" (1037)**: Error 1037 is a server-side error raised when the MySQL server itself runs out of memory. Error 2008 is specifically a client-side issue.
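Because the failure happens inside the client library, it surfaces in application code as a driver error rather than a server error. The following is a minimal sketch (assuming mysql-connector-python, placeholder credentials, and a hypothetical large_table) of distinguishing it by error number; note that a pure-Python driver may raise a plain MemoryError instead of reporting errno 2008:
# Sketch: detecting CR_OUT_OF_MEMORY (2008) on the client side (assumptions noted above)
import mysql.connector

try:
    db = mysql.connector.connect(host="localhost", user="app_user",
                                 password="app_password", database="app_db")
    cursor = db.cursor()  # mysql-connector-python default cursor; cursor(buffered=True) would hold all rows in memory
    cursor.execute("SELECT * FROM large_table")
    for row in cursor:
        pass  # process each row here
except mysql.connector.Error as err:
    if err.errno == 2008:  # CR_OUT_OF_MEMORY - the client ran out of memory
        print("Client out of memory: reduce the result set or fetch unbuffered")
    else:
        raise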
First, verify available system memory on the client machine:
# Linux - Check available RAM
free -h
free -m
# Detailed memory breakdown
cat /proc/meminfo
# Check process memory limits
ulimit -a
ulimit -v # Virtual memory limit
ulimit -m # Physical memory limit
# macOS
vm_stat
memory_pressure
# Windows
wmic OS get TotalVisibleMemorySize,FreePhysicalMemory

Note the output:
- free shows total, used, and available memory
- ulimit -v shows the virtual memory limit ("unlimited" means no limit is set)
- If available memory is much smaller than the expected result set size, that's the problem
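These checks can also be scripted. The sketch below (Linux only, Python standard library, with a made-up 500 MB figure standing in for the result-size estimate produced in the next step) compares the process address-space limit and the system's MemAvailable against the expected result size:
# Sketch: pre-check client memory before running a large query (Linux only)
import resource

estimated_result_bytes = 500 * 1024 * 1024  # example estimate: 500 MB

# Process address-space limit (ulimit -v reports this in kB; RLIMIT_AS is in bytes)
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
if soft != resource.RLIM_INFINITY and soft < estimated_result_bytes:
    print(f"RLIMIT_AS soft limit ({soft} bytes) is below the estimated result size")

# System-wide available memory from /proc/meminfo (value is in kB)
with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)
available_kb = int(meminfo["MemAvailable"].strip().split()[0])
if available_kb * 1024 < estimated_result_bytes:
    print("MemAvailable is below the estimated result size - expect CR_OUT_OF_MEMORY")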
Estimate how large the result set actually is:
-- Get row count and average row size
SELECT
COUNT(*) as total_rows,
AVG(LENGTH(CAST(CONCAT_WS(',', col1, col2, col3) AS CHAR))) as avg_row_bytes,
COUNT(*) * AVG(LENGTH(CAST(CONCAT_WS(',', col1, col2, col3) AS CHAR))) / 1024 / 1024 as approx_result_mb
FROM your_large_table;
-- Or approximate the total BLOB payload per table
SELECT
table_name,
COUNT(*) as row_count,
ROUND(SUM(octet_length) / 1024 / 1024, 2) as size_mb
FROM (
SELECT
'table_name' as table_name,
OCTET_LENGTH(blob_column) as octet_length
FROM your_table
) t
GROUP BY table_name;
-- For tables with BLOB columns, check max/min sizes
SELECT
MIN(OCTET_LENGTH(blob_column)) as min_size_bytes,
MAX(OCTET_LENGTH(blob_column)) as max_size_bytes,
AVG(OCTET_LENGTH(blob_column)) as avg_size_bytes,
COUNT(*) as row_count,
COUNT(*) * AVG(OCTET_LENGTH(blob_column)) / 1024 / 1024 as total_approx_mb
FROM your_table;

This shows if the result set is truly larger than available memory.
The simplest fix is to add LIMIT clauses to prevent fetching all rows at once:
-- Instead of:
SELECT * FROM large_table;
-- Use:
SELECT * FROM large_table LIMIT 1000;
-- For pagination, use OFFSET:
SELECT * FROM large_table LIMIT 1000 OFFSET 0; -- First page
SELECT * FROM large_table LIMIT 1000 OFFSET 1000; -- Second page
SELECT * FROM large_table LIMIT 1000 OFFSET 2000; -- Third page

For batch processing, iterate through results:
// PHP example - pagination loop
$pageSize = 1000;
$offset = 0;
while (true) {
$query = "SELECT * FROM large_table LIMIT $pageSize OFFSET $offset";
$result = $mysqli->query($query);
if ($result->num_rows === 0) {
break; // No more rows
}
while ($row = $result->fetch_assoc()) {
// Process one row at a time
process_row($row);
}
$offset += $pageSize;
}

# Python example
import mysql.connector
db = mysql.connector.connect(...)
cursor = db.cursor()
page_size = 1000
offset = 0
while True:
    query = f"SELECT * FROM large_table LIMIT {page_size} OFFSET {offset}"
    cursor.execute(query)
    rows = cursor.fetchall()
    if not rows:
        break
    for row in rows:
        process_row(row)
    offset += page_size

// Node.js example
const mysql = require('mysql2/promise');
const connection = await mysql.createConnection({...});
const pageSize = 1000;
let offset = 0;
while (true) {
const [rows] = await connection.execute(
'SELECT * FROM large_table LIMIT ? OFFSET ?',
[pageSize, offset]
);
if (rows.length === 0) break;
for (const row of rows) {
processRow(row);
}
offset += pageSize;
}

Use unbuffered result fetching to process rows one at a time instead of loading all at once:
// PHP mysqli - unbuffered query
$mysqli->real_query("SELECT * FROM large_table");
$result = $mysqli->use_result(); // Unbuffered fetch
while ($row = $result->fetch_assoc()) {
// Process one row at a time - memory efficient
process_row($row);
}
// Compare to buffered (default):
$result = $mysqli->query("SELECT * FROM large_table"); // Buffered
while ($row = $result->fetch_assoc()) {
// All rows loaded in memory before loop
process_row($row);
}

# Python MySQLdb - SSCursor (Server-Side Cursor)
import MySQLdb
from MySQLdb.cursors import SSCursor
db = MySQLdb.connect(...)
# Standard cursor - buffered (may cause memory error)
# cursor = db.cursor()
# SSCursor - unbuffered (memory efficient)
cursor = db.cursor(SSCursor)
cursor.execute("SELECT * FROM large_table")
for row in cursor:
    # Fetches rows one at a time from the server
    process_row(row)

// Node.js mysql2 - streaming
const mysql = require('mysql2');
const connection = mysql.createConnection({...});
const query = "SELECT * FROM large_table";
connection
.query(query)
.on('result', function(row) {
// Handle each row as it arrives
process_row(row);
})
.on('error', function(err) {
console.error(err);
})
.on('end', function() {
// Done
console.log('All rows processed');
});

# Ruby mysql2 - streaming
db = Mysql2::Client.new(host: 'localhost', username: 'root')
db.query("SELECT * FROM large_table").each do |row|
# Stream_each_row processes one row at a time
process_row(row)
endAllocate more memory to the MySQL client process:
# Linux - Increase virtual memory limit for current session
ulimit -v unlimited
# Or set permanent limit in /etc/security/limits.conf
# Add line:
# username soft as unlimited
# username hard as unlimited
# Then restart shell
# For system services, set in systemd unit
sudo systemctl edit service_name
# Add:
[Service]
# MemoryMax= is the current directive; MemoryLimit= is the deprecated cgroup v1 name
MemoryMax=4G
sudo systemctl daemon-reload
sudo systemctl restart service_name

# For PHP
# Edit php.ini
memory_limit = 512M
# Or set at runtime
ini_set('memory_limit', '512M');
# Check current limit
echo ini_get('memory_limit');

# For Java applications using MySQL JDBC
# Set JVM heap size
java -Xmx2G -Xms1G -jar application.jar
# Or in JAVA_OPTS environment variable
export JAVA_OPTS="-Xmx2G -Xms1G"

# For Python
# Python has no command-line memory flag; raise the process limit in code
# with the resource module before running large queries
import resource
# Set soft and hard RLIMIT_AS (address space) to 4 GB
resource.setrlimit(resource.RLIMIT_AS, (4*1024*1024*1024, 4*1024*1024*1024))

Note: Increasing memory limits is a band-aid solution. The root cause (unbounded queries) should still be fixed.
Reduce result set size through query optimization:
-- Only select needed columns
SELECT id, name, email FROM large_table;
-- Instead of SELECT *
-- Add WHERE clause to filter
SELECT * FROM large_table WHERE status = 'active';
-- Instead of all records
-- Use appropriate indexes
CREATE INDEX idx_status ON large_table(status);
CREATE INDEX idx_created ON large_table(created_at);
-- Check query plan
EXPLAIN SELECT * FROM large_table WHERE status = 'active' LIMIT 10;
-- Filter before JOINing large tables
SELECT a.*, b.*
FROM table_a a
JOIN table_b b ON a.id = b.a_id
WHERE a.created_at > DATE_SUB(NOW(), INTERVAL 30 DAY);
-- Instead of joining all rows then filtering
-- Avoid SELECT * with BLOB columns
SELECT id, name FROM large_table;
-- Instead of SELECT * when you don't need the BLOB column
-- For aggregations, reduce rows
SELECT category, COUNT(*) FROM large_table GROUP BY category;
-- Instead of fetching all rows to count in the application

Check query performance:
-- Enable query profiling
SET profiling = 1;
SELECT * FROM large_table LIMIT 10;
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;
-- Check slow query log
SHOW VARIABLES LIKE 'slow_query_log%';
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;

For large exports/imports, use specialized tools instead of loading all data in application memory:
# mysqldump with chunking
mysqldump --single-transaction --quick --lock-tables=false \
-h localhost -u root -p database table_name > backup.sql
# For very large tables, use --where to chunk:
mysqldump --single-transaction --quick \
-h localhost -u root -p database table_name \
--where="id >= 1 AND id < 100000" > chunk1.sql
mysqldump --single-transaction --quick \
-h localhost -u root -p database table_name \
--where="id >= 100000 AND id < 200000" > chunk2.sql# Using mysql command with pager for streaming output
mysql --quick -h localhost -u root -p -e "SELECT * FROM large_table" | \
  sed '1d' | \
  split -l 10000 - output_
# With --quick, the client streams rows instead of buffering the full result set

# mysqlsh (MySQL Shell) supports streaming
mysqlsh --uri user@localhost/database \
--execute "SELECT * FROM large_table LIMIT 1000"# Use load_infile for imports (faster, less memory)
LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE table_name
FIELDS TERMINATED BY ','
ENCLOSED BY '"';
# Application level with streaming
with open('large_file.csv', 'r') as f:
    rows = []
    for line in f:
        rows.append(parse_csv_line(line))
        if len(rows) == 1000:
            insert_batch(rows)
            rows = []
    if rows:
        insert_batch(rows)

Several advanced considerations apply to CR_OUT_OF_MEMORY errors:
1. Memory Calculation: Estimate result set size as (row_count × average_row_size_bytes) + overhead. MySQL client libraries typically add 5-10% overhead for internal structures, and utf8mb4 can use up to 4 bytes per character, significantly increasing the worst-case size (see the sketch after this list).
2. Process Memory Limits: Check ulimit, cgroups (Docker), or systemd MemoryLimit. These process-level limits can be lower than actual system RAM, causing errors despite available memory. This is common in containerized environments.
3. Client Library Differences:
- PDO/mysqli (PHP): Default buffered. Use unbuffered with PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false or mysqli->real_query() + use_result()
- Connectors: Always check documentation for buffered vs unbuffered modes
- ORMs: Framework-level pagination (Eloquent, SQLAlchemy) often handles this automatically
4. Garbage Collection Timing: In scripting languages, result sets may not be immediately freed. Explicitly free/close results:
$result->free(); // or $result->close()
unset($result);

5. Network Packet Assembly: Client-side network libraries buffer entire packets before returning rows. With max_allowed_packet > 100MB, packets themselves consume memory before rows are even parsed.
6. JOIN Cartesian Products: A JOIN between two large unfiltered tables (200K rows × 500K rows) produces 100 billion rows. Always verify JOIN conditions:
-- Debug: check estimated rows before and after JOIN
EXPLAIN SELECT COUNT(*) FROM table_a;
EXPLAIN SELECT COUNT(*) FROM table_b;
EXPLAIN SELECT COUNT(*) FROM table_a a JOIN table_b b ON a.id = b.a_id;

7. MySQL Client Options: The --quick flag in the mysql CLI enables unbuffered result fetching. The --compress flag reduces network traffic at the cost of extra CPU. Combine both for large exports: mysql --quick --compress
8. Monitoring Memory Usage: In production, monitor client process memory:
# Watch process memory in real-time
watch -n 1 "ps aux | grep mysql"
# Look at RSS (resident set size) and VSZ (virtual memory size)

9. Prepared Statements: Using prepared statements doesn't inherently solve memory issues. The result set is still buffered unless unbuffered fetching is used.
10. Character Encoding: VARCHAR(255) in utf8mb4 can be 1020 bytes (255 × 4). With 1 million rows, that's 1GB for a single column. Always be conscious of encoding in size calculations.
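As a rough illustration of the memory calculation in point 1, the following back-of-the-envelope sketch (plain Python, example numbers only, memory budget is a hypothetical value) estimates a buffered result set size and compares it with a memory budget:
# Sketch: back-of-the-envelope result-set size estimate (example numbers only)
row_count = 1_000_000
avg_row_bytes = 1020          # e.g. one VARCHAR(255) utf8mb4 column at worst case
overhead = 0.10               # roughly 5-10% client-side overhead for internal structures

estimated_bytes = row_count * avg_row_bytes * (1 + overhead)
estimated_mb = estimated_bytes / 1024 / 1024
print(f"Estimated buffered result set: {estimated_mb:.0f} MB")

memory_budget_mb = 512        # e.g. a PHP memory_limit or container limit (assumed value)
if estimated_mb > memory_budget_mb:
    print("Result set exceeds the memory budget - paginate or fetch unbuffered")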
For production batch jobs, always implement pagination with LIMIT/OFFSET, even if memory seems sufficient. System conditions change, and unbounded queries are a reliability risk.