This error occurs when the CNI (Container Network Interface) plugin is missing or misconfigured. Fix it by installing a CNI plugin such as Calico, Flannel, or Cilium and ensuring system prerequisites like IP forwarding are enabled.
The "network plugin is not ready: cni config uninitialized" error indicates that Kubernetes cannot initialize pod networking because no CNI (Container Network Interface) plugin is installed or configured. This is a critical error that prevents nodes from becoming Ready and blocks all pod scheduling. After running kubeadm init, Kubernetes creates the control plane but doesn't include a network plugin—you must install one separately. The kubelet waits for CNI configuration files in /etc/cni/net.d/ and plugin binaries in /opt/cni/bin/. Without these, nodes remain NotReady. This error commonly occurs on fresh cluster installations before applying a network add-on, or when CNI plugin pods crash or get deleted.
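A quick way to confirm this cause from the node itself is to look for the files the kubelet is waiting on. A minimal triage sketch, assuming the default kubeadm/containerd paths:

```shell
# Triage sketch: confirm the kubelet has no CNI config to load.
# /etc/cni/net.d and /opt/cni/bin are the kubeadm/containerd defaults.
ls /etc/cni/net.d/ 2>/dev/null || echo "no CNI config directory - no plugin installed yet"
ls /opt/cni/bin/ 2>/dev/null | head || true
# With cluster access, the node condition states the same cause:
#   kubectl describe node <node-name> | grep -A3 'Ready'
```

If both paths are empty or absent, the sections below walk through installing the prerequisites and a plugin.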
Enable IP forwarding and bridge netfilter (required for CNI):
# Enable immediately
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
sudo modprobe br_netfilter
# Persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cni.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl -p /etc/sysctl.d/99-kubernetes-cni.conf
Install CNI binaries if missing:
sudo apt-get update && sudo apt-get install -y kubernetes-cni
sudo systemctl restart kubelet
If starting fresh, specify the pod network CIDR:
# For Flannel (use 10.244.0.0/16)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# For Calico (use 192.168.0.0/16)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
If the cluster is already initialized, proceed to installing a CNI plugin.
Apply Flannel manifest:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Wait for flannel pods
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=300s
# Verify cni0 interface created
ip link show | grep cni0
# Check node status
kubectl get nodes
For network policy support, use Calico:
# Install Calico operator
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
# Wait for CRD
kubectl wait --for=condition=Established crd/installations.operator.tigera.io --timeout=300s
# Install Calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
# Verify pods running
kubectl get pods -n calico-system
# Check node status
kubectl get nodes
Check existing CNI configuration:
# List CNI configs
ls -la /etc/cni/net.d/
# Check config content
cat /etc/cni/net.d/*.conflist
# List CNI binaries
ls -la /opt/cni/bin/
# Check containerd can see CNI
grep -A5 "plugins" /etc/containerd/config.toml
If there is a version mismatch, check that the cniVersion declared in the config matches what the installed plugins support.
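To compare versions concretely: each conflist declares a cniVersion, and a CNI plugin binary reports the spec versions it supports when invoked with the VERSION operation from the CNI specification. A sketch, assuming the bridge plugin is installed at the default path:

```shell
# Show the cniVersion each config declares (if any configs exist):
grep -h '"cniVersion"' /etc/cni/net.d/*.conflist 2>/dev/null || echo "no conflist found"
# Ask a plugin binary which spec versions it supports (CNI VERSION op):
if [ -x /opt/cni/bin/bridge ]; then
  echo '{}' | CNI_COMMAND=VERSION /opt/cni/bin/bridge
fi
```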
Restart container runtime and kubelet:
sudo systemctl restart containerd
sudo systemctl restart kubelet
Watch logs for CNI initialization:
# View kubelet logs
journalctl -u kubelet -f | grep -i cni
# View containerd logs
journalctl -u containerd -f | grep -i cni
# Check node conditions
kubectl describe node <node-name>
# Look for: NetworkUnavailable condition
# View CNI plugin pod logs
kubectl logs -n kube-flannel -l app=flannel --tail=100
Once CNI initializes, nodes should transition to Ready.
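Rather than re-running kubectl get nodes by hand, the transition can be polled. A sketch assuming kubectl access; the 60 × 5 s timeout is an arbitrary choice:

```shell
# Poll the first node's STATUS column until it reports Ready (sketch).
if command -v kubectl >/dev/null 2>&1 && kubectl get nodes >/dev/null 2>&1; then
  for i in $(seq 1 60); do
    status=$(kubectl get nodes --no-headers | awk 'NR==1{print $2}')
    [ "$status" = "Ready" ] && break
    echo "node status: ${status:-unknown}; retrying in 5s..."
    sleep 5
  done
  kubectl get nodes
else
  status="no-cluster"
  echo "kubectl/cluster not reachable; run this where kubectl works"
fi
```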
CNI Initialization Sequence:
1. kubeadm init starts kubelet and static pods
2. Kubelet waits for CNI config in /etc/cni/net.d/
3. Container runtime loads CNI binaries from /opt/cni/bin/
4. Network plugin creates cni0 bridge and configures pod IPs
5. Nodes transition to Ready state
6. CoreDNS and other system pods start scheduling
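Each step in the sequence above can be spot-checked from the node. A sketch using the default paths; note that not every CNI creates a cni0 bridge (Calico, for example, does not):

```shell
# Step 2: CNI config present?
ls /etc/cni/net.d/ 2>/dev/null || echo "no CNI config yet"
# Step 3: plugin binaries present?
ls /opt/cni/bin/ 2>/dev/null | head || true
# Step 4: bridge created? (bridge-based CNIs such as Flannel)
ip link show cni0 2>/dev/null || echo "no cni0 bridge (normal for Calico)"
# Steps 5-6 need cluster access:
#   kubectl get nodes
#   kubectl get pods -n kube-system -l k8s-app=kube-dns
```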
Container Runtime Compatibility:
- containerd v1.6.0-v1.6.3 has known CNI issues—upgrade to v1.6.4+
- CRI-O must be restarted after CNI plugin installation
- All require: /opt/cni/bin/ binaries + /etc/cni/net.d/ config files
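To check whether a node falls in the affected containerd range, a sketch:

```shell
# Print the runtime version; upgrade if it reports 1.6.0-1.6.3.
if command -v containerd >/dev/null 2>&1; then
  containerd --version
else
  echo "containerd not found on PATH"
fi
# CRI-O equivalent:
#   crio version
```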
Common Misconfigurations:
1. Pod CIDR overlaps with host network
2. Firewall blocking CNI ports (Flannel: 8285/UDP, Calico: 179/TCP)
3. Missing IAM permissions (AWS EKS: AmazonEKS_CNI_Policy)
4. Applying CNI manifest before kubelet is ready
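The first item, an overlapping pod CIDR, can be checked without any cluster tooling. A pure-shell sketch; ip2int and cidrs_overlap are illustrative helpers, not part of any Kubernetes tool:

```shell
# Return success (0) if two IPv4 CIDRs overlap.
ip2int() ( IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )) )
cidrs_overlap() {
  n1=${1%/*}; p1=${1#*/}; n2=${2%/*}; p2=${2#*/}
  minp=$(( p1 < p2 ? p1 : p2 ))                               # shorter prefix wins
  mask=$(( minp == 0 ? 0 : (0xFFFFFFFF << (32 - minp)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}
# Flannel's default pod CIDR vs a 10.244.x host network: overlaps
cidrs_overlap 10.244.0.0/16 10.244.3.0/24 && echo "overlap - pick a different pod CIDR"
# Calico's default vs the same host network: disjoint
cidrs_overlap 192.168.0.0/16 10.244.3.0/24 || echo "no overlap"
```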
Recovery Strategy:
1. Cordon affected nodes: kubectl cordon <node>
2. Fix CNI installation
3. Restart kubelet: systemctl restart kubelet
4. Verify Ready: kubectl get nodes
5. Uncordon: kubectl uncordon <node>
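The five recovery steps above can be combined into one guarded script. A sketch, where NODE is a placeholder for the affected node's name:

```shell
# Cordon, fix, restart, verify, uncordon - steps 1-5 above in order.
NODE="${1:-}"   # pass the node name as the first argument
if [ -z "$NODE" ] || ! command -v kubectl >/dev/null 2>&1; then
  echo "usage: $0 <node-name>  (requires kubectl access)"
else
  kubectl cordon "$NODE"
  # Step 2: fix the CNI installation here (reapply the plugin manifest,
  # restore /etc/cni/net.d/, etc.) before continuing.
  sudo systemctl restart kubelet
  kubectl wait --for=condition=Ready "node/$NODE" --timeout=300s
  kubectl uncordon "$NODE"
fi
```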