CR_SERVER_LOST (2013) is raised when the client loses its TCP session to the server in the middle of a query: the client sees "Lost connection to MySQL server during query", the server logs an "Aborted connection" entry, and neither side can tell the other whether the statement finished. Timeouts, packet limits, network drops, and a restarted server are the usual suspects.
The C API and all MySQL connectors surface CR_SERVER_LOST when the request/response round trip for a query breaks unexpectedly. The server logs an "Aborted connection" message and increments the Aborted_clients and Connection_errors_% counters, while the client receives error 2013 before any result is delivered. The most common causes are server timeouts (wait_timeout, interactive_timeout, connect_timeout), max_allowed_packet or net_write_timeout limits, network or firewall devices that cut idle sessions, and a mysqld process that restarts or crashes while executing the query. Because the client cannot determine whether the server committed the work, the application must both prevent premature disconnects and be prepared to retry safely.
Correlate the error log, general query log, and status variables to understand why the server ended the session:
SHOW VARIABLES LIKE 'log_error_verbosity';
SHOW STATUS LIKE 'Aborted_%';
SHOW GLOBAL STATUS LIKE 'Connection_errors_%';

Outside the shell, run mysqladmin version to see uptime and mysqladmin variables | grep wait_timeout to check timeouts. Look for "Aborted connection" entries that mention the user/host and query ID. If you cannot reproduce the drop, briefly enable the general query log to capture the failing SQL.
Idle clients and long-running queries must finish within the server's wait_timeout (or interactive_timeout for interactive sessions); connect_timeout bounds only the initial handshake. For example:
SET GLOBAL wait_timeout = 86400;
SET GLOBAL interactive_timeout = 86400;
SET GLOBAL connect_timeout = 20;

Configure your connection pool to recycle connections more frequently than the lowest of these timeouts, and enable connection testing (e.g., SELECT 1) before handing a socket to the application. Never let pooled connections sit idle past the server's limit.
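That pooling policy can be sketched in plain Python (the `PooledConnection` class, the assumed wait_timeout of 28,800 seconds, and the 60-second safety margin are all illustrative, not a specific driver's API):

```python
import time

WAIT_TIMEOUT = 28800            # assumed server wait_timeout in seconds
RECYCLE_AFTER = WAIT_TIMEOUT - 60  # recycle before the server kills the session

class PooledConnection:
    """Illustrative stand-in for a real driver connection."""
    def __init__(self):
        self.last_used = time.time()
        self.alive = True

    def ping(self):
        # A real driver would send COM_PING; here we just report liveness.
        return self.alive

def checkout(pool):
    """Return a healthy connection, discarding stale or dead ones."""
    while pool:
        conn = pool.pop()
        idle = time.time() - conn.last_used
        if idle < RECYCLE_AFTER and conn.ping():
            return conn          # safe to hand to the application
    return PooledConnection()    # pool exhausted: open a fresh connection
```

Real pools expose the same idea as configuration, e.g. a "recycle after N seconds" setting plus a pre-use ping.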
If the server kills the connection while you stream large rows or BLOBs, bump the relevant limits:
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 67108864;
SET GLOBAL net_write_timeout = 120;
SET GLOBAL net_read_timeout = 120;

Mirror these settings on the client side (under [mysql] in the option file, or via connection attributes in your connector) so both ends allow the same packet size.
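For example, an option file that keeps both sides in agreement might look like this (the 64M value is illustrative; size it to your largest row or BLOB):

```ini
# Server side: the largest packet mysqld will accept
[mysqld]
max_allowed_packet = 64M
net_write_timeout  = 120
net_read_timeout   = 120

# Client side: the mysql CLI must allow the same size,
# otherwise it rejects packets the server would accept
[mysql]
max_allowed_packet = 64M
```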
Oversized INSERT/UPDATE/LOAD DATA statements may be aborted because the query packet exceeds max_allowed_packet or the statement outlives a network timeout. Split bulk work into smaller batches and commit between them:
INSERT INTO big_table (cols) VALUES (...), (...);
-- keep each batch to a few thousand rows or less than 64MB payload
COMMIT;

Use a bulk API such as cursor.executemany in MySQL Connector/Python so the client sends manageable packets instead of one enormous statement.
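The batching idea can be sketched in plain Python (the `big_table` schema, the `conn`/`cursor` objects, and the 5,000-row batch size are placeholders for your driver and workload):

```python
def batches(rows, size=5000):
    """Yield successive slices of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

def bulk_insert(conn, cursor, rows, size=5000):
    """Insert rows in bounded batches, committing between them so a
    dropped connection loses at most one batch of uncommitted work."""
    sql = "INSERT INTO big_table (a, b) VALUES (%s, %s)"
    for batch in batches(rows, size):
        cursor.executemany(sql, batch)   # one manageable packet per batch
        conn.commit()                    # keep transactions short
```

Committing per batch also keeps transactions small, so a retry after CR_SERVER_LOST only has to replay the last batch.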
Firewalls, routers, or cloud load balancers sometimes drop idle connections faster than mysqld or the connector. Coordinate timeouts across all layers:
- Lower proxy idle timeout to match or exceed MySQL timeouts, or add keepalive probes.
- Verify security groups/iptables allow traffic to the MySQL port throughout long-running exchanges.
- Check for duplex/MTU issues by copying a large file over the same network path.
If you run in containers, ensure Kubernetes services or sidecars do not restart pods while a query is running.
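One client-side defense against middleboxes dropping idle sessions is enabling TCP keepalives on the MySQL socket, so the path sees traffic even while a long query runs server-side. A sketch (the 60-second idle value is an assumption, and the fine-grained knobs use Linux names, so they are guarded):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, probes=5):
    """Send TCP keepalive probes so firewalls/load balancers do not
    silently drop an idle-looking connection mid-query."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The per-probe settings are platform-specific (Linux names shown).
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock
```

Some connectors expose the underlying socket or accept keepalive options directly; check your driver before reaching for raw sockets.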
Before running heavy queries, ping the connection and reconnect if necessary. Wrap statements in retry logic that distinguishes between transient CR_SERVER_LOST errors and hard failures:
// pseudo-code: retry CR_SERVER_LOST (2013) with bounded backoff
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
  try {
    await connection.ping();           // reconnect here if the ping fails
    return await connection.query(sql);
  } catch (err) {
    if (isTransient(err, 2013) && attempt < maxAttempts) {
      await delay(backoff(attempt));   // wait longer after each failure
      continue;
    }
    throw err;                         // hard failure or retries exhausted
  }
}

Combine retries with idempotency (use unique request IDs) to avoid duplicate commits.
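One way to make such retries safe is to key writes on a unique request ID so a replayed statement becomes a no-op. A sketch, assuming a hypothetical payments table with request_id as its primary key (the schema, table, and `execute` callable are illustrative):

```python
import uuid

# Assumed schema: payments(request_id CHAR(36) PRIMARY KEY, amount INT).
# INSERT IGNORE turns a retried write into a no-op instead of a duplicate row.
SQL = "INSERT IGNORE INTO payments (request_id, amount) VALUES (%s, %s)"

def idempotent_insert(execute, amount, request_id=None):
    """Retry-safe insert: replaying with the same request_id changes nothing."""
    request_id = request_id or str(uuid.uuid4())
    execute(SQL, (request_id, amount))
    return request_id
```

After a CR_SERVER_LOST error, the application simply re-runs the statement with the same request ID; whether the first attempt committed or not, exactly one row exists afterwards.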
To dig deeper, raise log_error_verbosity to 3 so mysqld writes disconnection details to the error log, and check the slow query log for statements that run long enough to hit timeouts. For proxies (ProxySQL, HAProxy, cloud SQL proxies), ensure their idle timeouts, health checks, and rewrites respect MySQL's timeouts and packet limits. In Kubernetes or other orchestrated environments, check that liveness/readiness probes and pod lifecycle hooks do not drop sockets while background jobs are running. Finally, verify you are running matching client/server versions, because protocol bugs can generate garbled packets that the server treats as connection loss.