Kubernetes requires swap to be completely disabled on all cluster nodes (both control plane and worker) before kubeadm can initialize the cluster. This error appears during kubeadm's preflight checks when active swap is detected on the system.
Kubeadm fails fast here because Kubernetes is designed to have full control over memory allocation. Allowing the operating system to use swap introduces unpredictability in pod memory management, performance degradation, and stability issues that compromise cluster reliability. The scheduler cannot account for swap when making pod placement decisions, and pods request only memory, never swap space. Historically, swap was therefore considered incompatible with Kubernetes: it makes memory limits behave unexpectedly, and pods can suffer significant performance penalties when their pages are swapped to disk.
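The check is enforced both by kubeadm's preflight and by the kubelet itself. On a node where the kubelet has already been configured (for example, a worker that hits this error when rejoining a cluster), you can inspect its swap policy; the path below assumes a kubeadm-managed kubelet and is only illustrative:
# Inspect the kubelet's swap policy (path assumes a kubeadm-provisioned node)
grep -i failSwapOn /var/lib/kubelet/config.yaml
# "failSwapOn: true" (or no entry, since true is the default) means the kubelet
# refuses to run while any swap is active.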
Verify whether swap is currently enabled:
free -h
If the Swap row shows a value greater than 0 in the "total" column, swap is enabled. You can also check with:
swapon --show
Or view your fstab configuration:
cat /etc/fstab
Look for any lines containing the word "swap" (not commented out with #).
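If you need to check several nodes, a short scripted check (a sketch, not part of any Kubernetes tooling) makes the state unambiguous:
# Report whether any swap device or file is active on this host
if [ -n "$(swapon --show --noheadings)" ]; then
  echo "swap is still ENABLED on $(hostname)"
else
  echo "no active swap on $(hostname)"
fi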
Disable swap immediately without rebooting:
sudo swapoff -a
Verify swap is now disabled:
free -h
The Swap row should now show 0 total. Note: This change is temporary and swap will re-enable on the next reboot unless you also modify /etc/fstab.
Comment out all swap entries in /etc/fstab to prevent swap from re-enabling on reboot:
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Verify the changes:
cat /etc/fstab
All swap-related lines should now start with #. If you prefer manual editing:
sudo nano /etc/fstab
Find lines containing "swap" and add # at the beginning.
After modifying /etc/fstab, check that the file syntax is correct:
sudo mount -a
If no errors appear, the syntax is valid. Swap lines should look like:
# /swapfile swap swap defaults 0 0
Note the # at the beginning indicating the line is commented out.
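Keep in mind that /etc/fstab is not the only place swap can be configured. On systemd-based distributions, swap may also come from a zram device or a dedicated swap unit, which the sed command above will not touch. As a rough sketch for such nodes:
# List swap units managed by systemd (covers zram and other non-fstab swap)
systemctl --type=swap --all
# Optionally prevent systemd from activating swap at boot (systemd-based nodes only)
sudo systemctl mask swap.target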
Now that swap is disabled, initialize your Kubernetes cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Replace the pod-network-cidr value with the range required by your CNI plugin (10.244.0.0/16 is Flannel's default). If the error persists, ensure the following (a scripted cross-node check is sketched after this list):
1. All nodes (control plane and workers) have swap disabled
2. Run sudo swapoff -a if swap somehow re-enabled
3. Check for multiple swap devices with swapon --show
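To verify the first and third points across every node at once, a small loop over SSH works; the node names below are placeholders for your own hosts:
# Report active swap on each node before re-running kubeadm init
# (node names are placeholders; replace with your control plane and worker hosts)
for node in control-plane-1 worker-1 worker-2; do
  ssh "$node" 'echo "$(hostname): $(swapon --show --noheadings | wc -l) active swap device(s)"'
done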
Confirm swap remains disabled across reboots:
sudo reboot
After the system boots, verify swap is still disabled:
free -h
Swap should show 0 total. Also verify your Kubernetes nodes are healthy:
kubectl get nodes -o wide
All nodes should show status "Ready". If a node shows "NotReady" after reboot, check kubelet status:
sudo systemctl status kubelet
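If the kubelet is not running, its logs normally state the reason directly; on systemd-based nodes you can filter for swap-related failures:
# Show recent kubelet log lines that mention swap
sudo journalctl -u kubelet --no-pager | grep -i swap | tail -n 20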
Kubernetes has gradually relaxed this restriction in recent releases. Kubernetes 1.22 introduced alpha support for swap through the NodeSwap feature gate, allowing limited swap usage on a per-node basis, and in Kubernetes 1.28 swap support on Linux nodes graduated to beta with significant improvements and cgroup v2 support. However, swap support still requires explicitly enabling the feature gate and is not recommended for production clusters without thorough testing. NodeSwap only supports cgroup v2 (not cgroup v1), and it is aimed primarily at experimental or edge-case scenarios.
For a standard Kubernetes cluster, disabling swap remains the recommended and simplest approach. The pre-1.22 workaround of passing the --fail-swap-on=false kubelet flag is deprecated.
If swap must be used on Kubernetes 1.22+, enable the NodeSwap feature gate and configure memorySwap.swapBehavior (LimitedSwap is default).
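As a rough sketch only (the field names follow the KubeletConfiguration API, and the /var/lib/kubelet/config.yaml path assumes a kubeadm-provisioned node), the relevant kubelet settings and restart look like this:
# Settings to add to /var/lib/kubelet/config.yaml on each node that keeps swap enabled.
# Edit the file in place rather than appending if these keys already exist:
#
#   failSwapOn: false
#   featureGates:
#     NodeSwap: true
#   memorySwap:
#     swapBehavior: LimitedSwap
#
# Then restart the kubelet to apply the change:
sudo systemctl restart kubelet
When initializing or joining a node with swap left on, kubeadm's own preflight check also has to be skipped explicitly (for example with --ignore-preflight-errors=Swap).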
Most production deployments should keep swap disabled to maintain predictable performance and avoid memory management complexities.