This MySQL connection error occurs when the client loses connection to the MySQL server, typically due to network issues, server timeouts, or server restarts. The error indicates that the MySQL server terminated the connection unexpectedly, often during long-running queries or idle periods.
The CR_SERVER_GONE_ERROR (2006) is a MySQL client error that occurs when the connection between the client application and the MySQL server is lost. This can happen for several reasons:
1. **Network Issues**: Physical network problems, firewall changes, or routing issues can disconnect the client from the server.
2. **Server Timeouts**: MySQL has several timeout settings (wait_timeout, interactive_timeout) that automatically close idle connections. If a connection remains idle longer than these timeouts, the server closes it.
3. **Server Restarts/Crashes**: If the MySQL server is restarted, crashes, or is killed, all active connections are terminated.
4. **Resource Limitations**: The server may close connections due to memory pressure, too many connections (max_connections), or other resource constraints.
5. **Long-running Queries**: Queries that exceed certain limits (max_execution_time in MySQL 5.7+) may be killed by the server.
This error is particularly common in applications that use connection pooling, run long background jobs, or maintain persistent connections to the database.
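To make the idle-timeout case concrete, here is a minimal sketch that deliberately shrinks the session wait_timeout, idles past it, and catches the resulting client error. The host, credentials, and database name are placeholders; depending on connector and timing the error surfaces as errno 2006 (CR_SERVER_GONE_ERROR) or 2013 (CR_SERVER_LOST).
# Python example: reproducing the error by idling past wait_timeout
import time
import mysql.connector
from mysql.connector import Error

conn = mysql.connector.connect(host='localhost', user='user',
                               password='password', database='mydb')
cur = conn.cursor()

# Shrink the idle timeout for this session only so the demo finishes quickly
cur.execute("SET SESSION wait_timeout = 5")

# Stay idle longer than wait_timeout; the server closes the connection
time.sleep(10)

try:
    cur.execute("SELECT 1")
    print(cur.fetchall())
except Error as e:
    # errno 2006 = CR_SERVER_GONE_ERROR, errno 2013 = CR_SERVER_LOST
    print(f"Lost connection (errno {e.errno}): {e.msg}")
finally:
    try:
        conn.close()
    except Error:
        pass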
Examine and potentially increase MySQL server timeout settings:
-- Check current timeout values
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';
-- Typical default is 28800 seconds (8 hours)
-- For applications with connection pooling, consider:
SET GLOBAL wait_timeout = 86400; -- 24 hours
SET GLOBAL interactive_timeout = 86400; -- 24 hours
-- For persistent changes, add to my.cnf/my.ini:
[mysqld]
wait_timeout = 86400
interactive_timeout = 86400
Note: Changing global variables requires appropriate privileges (SUPER or SYSTEM_VARIABLES_ADMIN), and SET GLOBAL only affects sessions opened after the change; my.cnf/my.ini changes take effect after a server restart.
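If you want the application itself to verify the timeouts it is actually running under (for example, to keep a pool's idle timeout safely below them), a minimal sketch, assuming mysql-connector-python and placeholder credentials:
# Python example: reading the effective timeouts so pool settings stay below them
import mysql.connector

conn = mysql.connector.connect(host='localhost', user='user',
                               password='password', database='mydb')
cur = conn.cursor()
cur.execute("SHOW SESSION VARIABLES LIKE 'wait_timeout'")
session_wait = int(cur.fetchone()[1])
cur.execute("SHOW GLOBAL VARIABLES LIKE 'wait_timeout'")
global_wait = int(cur.fetchone()[1])
print(f"session wait_timeout={session_wait}s, global wait_timeout={global_wait}s")

# Rule of thumb: keep any pool idle/max-lifetime setting comfortably below these values
conn.close()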
Add connection checking before using database connections:
# Python example with mysql-connector-python
import mysql.connector
from mysql.connector import Error

def get_connection():
    try:
        # Reuse a cached connection if it is still alive
        if hasattr(get_connection, 'cached_conn'):
            try:
                # ping(reconnect=True) transparently reconnects a dropped connection
                get_connection.cached_conn.ping(reconnect=True, attempts=3, delay=1)
                return get_connection.cached_conn
            except Error:
                # Connection is dead and could not be re-established; create a new one
                pass
        # Create a new connection
        conn = mysql.connector.connect(
            host='localhost',
            user='user',
            password='password',
            database='mydb',
            connection_timeout=30
        )
        get_connection.cached_conn = conn
        return conn
    except Error as e:
        print(f"Connection failed: {e}")
        raise
Similar patterns apply to other languages - always validate connections before use.
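If you use SQLAlchemy, its pool_pre_ping and pool_recycle options perform this validation for you. A minimal sketch, with a placeholder connection URL and an illustrative recycle interval:
# Python example with SQLAlchemy's built-in pre-ping validation
from sqlalchemy import create_engine, text

# pool_pre_ping issues a lightweight check on each checkout and transparently
# replaces connections the server has already closed; pool_recycle proactively
# refreshes connections before they can hit wait_timeout
engine = create_engine(
    "mysql+mysqlconnector://user:password@localhost/mydb",
    pool_pre_ping=True,
    pool_recycle=3600,  # seconds; keep below the server's wait_timeout
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())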
Use connection pooling with validation queries and appropriate timeouts:
// Java example with HikariCP
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
config.setUsername("user");
config.setPassword("password");
config.setMaximumPoolSize(10);
config.setMinimumIdle(5);
config.setConnectionTimeout(30000); // 30 seconds
config.setIdleTimeout(600000); // 10 minutes
config.setMaxLifetime(1800000); // 30 minutes
config.setConnectionTestQuery("SELECT 1");
config.setValidationTimeout(5000); // 5 seconds
// Important: Set MySQL-specific properties
config.addDataSourceProperty("autoReconnect", "true");
config.addDataSourceProperty("maxReconnects", "3");
config.addDataSourceProperty("initialTimeout", "2");
HikariDataSource dataSource = new HikariDataSource(config);
Different pooling libraries have similar configuration options for handling stale connections.
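The same approach sketched in Python with mysql-connector-python's built-in MySQLConnectionPool; the pool name, size, and credentials are illustrative:
# Python example with mysql-connector-python's built-in connection pool
from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(
    pool_name="app_pool",
    pool_size=10,
    pool_reset_session=True,   # reset session state when a connection is returned
    host="localhost",
    user="user",
    password="password",
    database="mydb",
    connection_timeout=30,
)

def run_query(sql, params=None):
    # get_connection() hands out a pooled connection; close() returns it to the pool
    conn = pool.get_connection()
    try:
        conn.ping(reconnect=True, attempts=3, delay=1)  # revive a stale connection
        cur = conn.cursor()
        cur.execute(sql, params or ())
        return cur.fetchall()
    finally:
        conn.close()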
If you're transferring large queries or result sets, increase max_allowed_packet:
-- Check current value
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Increase (default is 4MB-64MB depending on version)
SET GLOBAL max_allowed_packet = 67108864; -- 64MB (applies to connections opened after the change)
-- For persistent changes, add to my.cnf:
[mysqld]
max_allowed_packet = 64M
-- Also check client-side setting if using certain connectors
[mysql]
max_allowed_packet = 64M
Large BLOB/TEXT values, bulk inserts, or very long statements may exceed the default packet size.
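On the client side, one way to stay clear of the packet limit is to batch bulk inserts so no single statement approaches max_allowed_packet. A minimal sketch, assuming mysql-connector-python; the table, column, payload, and batch size are illustrative:
# Python example: batching bulk inserts to stay below max_allowed_packet
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="user",
                               password="password", database="mydb")
cur = conn.cursor()

cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
max_packet = int(cur.fetchone()[1])
print(f"max_allowed_packet = {max_packet} bytes")

rows = [("value-%d" % i,) for i in range(100_000)]  # hypothetical payload
batch_size = 1000  # keep each executemany() statement well below max_packet

for start in range(0, len(rows), batch_size):
    cur.executemany("INSERT INTO large_table (unindexed_column) VALUES (%s)",
                    rows[start:start + batch_size])
conn.commit()
conn.close()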
Long-running queries can trigger timeouts or be killed by the server:
-- Find slow queries
SHOW PROCESSLIST;
-- Enable slow query log
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 2; -- queries longer than 2 seconds
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
-- Check for queries being killed
SHOW GLOBAL STATUS LIKE 'aborted_connects';
SHOW GLOBAL STATUS LIKE 'aborted_clients';
-- Analyze and optimize slow queries
EXPLAIN SELECT * FROM large_table WHERE unindexed_column = 'value';
-- Consider adding indexes
CREATE INDEX idx_column ON large_table(unindexed_column);
Regular query optimization reduces the chance of queries being killed due to timeout.
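To keep an eye on statements that are at risk of hitting these limits, you can poll INFORMATION_SCHEMA.PROCESSLIST from application or monitoring code. A minimal sketch, assuming a monitoring account with the PROCESS privilege; the 60-second threshold is illustrative:
# Python example: flagging long-running statements via INFORMATION_SCHEMA.PROCESSLIST
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="monitor",
                               password="password")
cur = conn.cursor()
cur.execute("""
    SELECT id, user, time, LEFT(info, 120)
    FROM information_schema.processlist
    WHERE command = 'Query' AND time > 60
    ORDER BY time DESC
""")
for conn_id, user, seconds, statement in cur.fetchall():
    print(f"[{conn_id}] {user} running {seconds}s: {statement}")
conn.close()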
Add intelligent retry logic for transient connection errors:
// Node.js example with exponential backoff
async function executeWithRetry(query, params, maxRetries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const connection = await pool.getConnection();
      try {
        const [rows] = await connection.execute(query, params);
        return rows;
      } finally {
        connection.release();
      }
    } catch (err) {
      lastError = err;
      // Only retry "server gone away" style errors
      const retryable = err.code === 'PROTOCOL_CONNECTION_LOST' ||
        err.errno === 2006 ||
        (err.message && err.message.includes('server has gone away'));
      if (retryable && attempt < maxRetries) {
        // Exponential backoff: 100ms, 200ms, 400ms
        const delay = Math.pow(2, attempt - 1) * 100;
        console.log(`Retrying query (attempt ${attempt}/${maxRetries})...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      // Not a retryable error or out of retries
      break;
    }
  }
  throw lastError;
}
Implement similar retry patterns in your application framework.
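A comparable pattern in Python, sketched as a decorator around mysql-connector-python calls; the fetch_users function, users table, and conn_factory callable are hypothetical:
# Python example: retry decorator with exponential backoff for errno 2006/2013
import time
import functools
from mysql.connector import Error

RETRYABLE_ERRNOS = {2006, 2013}  # CR_SERVER_GONE_ERROR, CR_SERVER_LOST

def retry_on_connection_loss(max_retries=3, base_delay=0.1):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Error as e:
                    # Re-raise immediately if the error is not retryable or retries are exhausted
                    if e.errno not in RETRYABLE_ERRNOS or attempt == max_retries:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))  # 0.1s, 0.2s, 0.4s
        return wrapper
    return decorator

@retry_on_connection_loss()
def fetch_users(conn_factory):
    # conn_factory is a hypothetical callable that returns a fresh, validated connection
    conn = conn_factory()
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM users")
        return cur.fetchall()
    finally:
        conn.close()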
The CR_SERVER_GONE_ERROR is a complex issue with multiple potential causes. Advanced considerations:
1. Load Balancers and Proxies: If using a database proxy (ProxySQL, HAProxy) or cloud provider load balancer, check their timeout settings which may be shorter than MySQL's timeouts.
2. Kubernetes and Container Environments: In containerized environments, network policies, service mesh sidecars, or pod restarts can cause connection drops. Use readiness probes and connection draining.
3. MySQL 8.0+ Improvements: MySQL 8.0 introduced better connection management features. Consider upgrading if using older versions.
4. Replication and Cluster Environments: In replication setups, connections might fail over to replicas. Ensure your application handles failover correctly.
5. Cloud Database Services: AWS RDS, Google Cloud SQL, and Azure Database for MySQL have their own connection management characteristics and best practices.
6. Application Server Keep-Alive: Some application servers (like PHP-FPM) have their own connection persistence settings that may conflict with MySQL timeouts.
For mission-critical applications, consider implementing a circuit breaker pattern to gracefully handle database outages and prevent cascading failures.
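A minimal circuit-breaker sketch in Python; the thresholds and the wrapped helpers are illustrative, not a production-ready implementation:
# Python example: minimal circuit breaker around database calls
import time
from mysql.connector import Error

class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_timeout=30):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout   # seconds to stay open before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # While open, fail fast until the reset timeout has elapsed
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: database calls suspended")
            self.opened_at = None            # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Error:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                    # success closes the circuit
        return result

# Usage (fetch_users and conn_factory are the hypothetical helpers from the retry example)
# breaker = CircuitBreaker()
# rows = breaker.call(fetch_users, conn_factory)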