NetworkUnavailable indicates the node's CNI is not properly configured. The scheduler prevents new pods from being scheduled on that node because pod networking cannot be established.
The NetworkUnavailable node condition in Kubernetes indicates that the node's Container Network Interface (CNI) is not properly configured or has encountered a connectivity problem. When this condition is True, the Kubernetes scheduler prevents new pods from being scheduled onto that node because the node cannot provide networking capabilities. The NetworkUnavailable condition is typically managed by the CNI provider (such as Calico, Flannel, or Cilium) and reflects whether that provider can successfully configure container networking on the node. If the CNI cannot reach the Kubernetes API server, lacks the necessary RBAC permissions, or encounters network configuration errors during startup, it marks the node's network as unavailable. Unlike some other node conditions, NetworkUnavailable problems that occur at runtime are not always reflected in the condition status, so it is essential to check both the condition and the CNI plugin logs.
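To see exactly what the plugin reported, dump the full condition object; the reason and message fields vary by CNI (Calico, for instance, reports CalicoIsUp when healthy), and <node-name> is a placeholder:
# Print the complete NetworkUnavailable condition object, including reason, message, and timestamps
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="NetworkUnavailable")]}'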
Start by confirming which nodes have the NetworkUnavailable condition:
# Check all node conditions, focusing on NetworkUnavailable
kubectl describe nodes | grep -A 20 "Conditions:"
# Get a compact view of all nodes and their NetworkUnavailable status
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="NetworkUnavailable")].status}{"\n"}{end}'
# Describe a specific problematic node
kubectl describe node <node-name>
Look for any node showing NetworkUnavailable with status True in the Conditions section.
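If you want only the affected nodes rather than the full list, a jq filter works too (this assumes jq is installed locally):
# Print only the names of nodes whose NetworkUnavailable condition is True
kubectl get nodes -o json | jq -r '.items[] | select(.status.conditions[] | select(.type=="NetworkUnavailable" and .status=="True")) | .metadata.name'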
Verify that the CNI plugin pods are deployed and actively running:
# List all CNI plugin pods
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system -l app=flannel   # newer Flannel releases deploy to the kube-flannel namespace
kubectl get pods -n kube-system -l k8s-app=cilium
# Check the overall kube-system namespace for CNI-related pods
kubectl get pods -n kube-system | grep -E 'calico|flannel|cilium|canal'
If no CNI plugin is deployed, that is the root cause. Install the appropriate CNI plugin, for example Calico via the Tigera operator:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
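Note that the manifest above only installs the Tigera operator itself; an operator-based Calico install typically also needs the accompanying custom resources applied (URL shown for v3.26.1 as an example; adjust the version to match):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml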
Check the logs of the CNI plugin pods running on the problematic node:
# For Calico
kubectl logs -n kube-system -l k8s-app=calico-node --tail=100
# For Flannel
kubectl logs -n kube-system -l app=flannel --tail=100
# For Cilium
kubectl logs -n kube-system -l k8s-app=cilium --tail=100
# Look for specific error patterns
kubectl logs -n kube-system -l k8s-app=calico-node 2>&1 | grep -i 'error\|failed\|refused\|unauthorized'
Common error patterns:
- "error accessing apiserver" - network/firewall issue
- "forbidden" or "unauthorized" - RBAC permission problem
- "no such file or directory" - missing CNI binary or configuration
- "IP allocation failure" - CIDR exhaustion or configuration mismatch
Ensure the pod network CIDR is correctly configured and does not overlap with other network ranges:
# Check the cluster's pod CIDR configuration
kubectl cluster-info dump | grep -i "pod-network-cidr\|cluster-cidr"
# Get pod CIDR assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
If a node has no podCIDR assigned, manually patch it:
kubectl patch node <node-name> -p '{"spec":{"podCIDR":"10.244.X.0/24"}}'
Note that podCIDR can only be set while it is empty; once assigned, the field is immutable. Confirm that the pod CIDR, service CIDR, and any VPC/infrastructure CIDR ranges do not overlap.
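On kubeadm-based clusters, the configured pod and service subnets are recorded in the kubeadm-config ConfigMap, which makes overlap checks easier (this ConfigMap does not exist on all distributions):
# Show the podSubnet and serviceSubnet values side by side
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i 'podSubnet\|serviceSubnet'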
Test network connectivity between the problematic node and the control plane:
# SSH into the node
ssh <node-ip>
# Check routes on the node
ip route show
# Check if the CNI binary exists on the node
ls -la /opt/cni/bin/
# Test connectivity to other nodes' pod networks
ping <pod-ip-from-another-node>
If routes are missing, the CNI plugin has not yet programmed them. Restart the CNI pod on that node to trigger route re-initialization.
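While on the node, also confirm that the ports your CNI relies on are open between nodes; the ports below are common defaults and the target IP is a placeholder:
# Test BGP peering reachability to another node (Calico's default, TCP 179)
nc -zv <other-node-ip> 179
# List VXLAN interfaces and their UDP ports (e.g., flannel.1 on 8472, vxlan.calico on 4789)
ip -d link show type vxlan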
Force the CNI plugin pods to restart:
# Delete CNI pods on the problematic node (DaemonSet will recreate them)
kubectl delete pods -n kube-system -l k8s-app=calico-node --field-selector spec.nodeName=<node-name>
# Watch for the pods to restart
kubectl get pods -n kube-system -w | grep -E 'calico|flannel|cilium'
# Once pods are Running, check the node condition again
kubectl describe node <node-name> | grep -A 5 "Conditions:"
After the pods restart, the NetworkUnavailable condition should transition to False within a few seconds.
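To script the verification, kubectl wait can block until the condition clears; the two-minute timeout is arbitrary:
# Block until NetworkUnavailable reports False (status value matching may be case-sensitive in older kubectl versions)
kubectl wait --for=condition=NetworkUnavailable=False node/<node-name> --timeout=120s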
NetworkUnavailable condition management varies by CNI plugin. Calico actively updates the condition during operation, while some plugins only set it once during initialization. This means the condition can go stale: a node may still report NetworkUnavailable as False even though the CNI has failed at runtime and pods can no longer reach one another. Always cross-check the condition against actual pod connectivity and the CNI plugin logs.
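Because the condition can go stale, it is worth verifying pod networking directly rather than trusting the condition alone; the pod name, image, and target IP below are placeholders:
# Launch a throwaway pod and ping a pod IP hosted on the suspect node
kubectl run nettest --rm -it --restart=Never --image=busybox -- ping -c 3 <pod-ip-on-suspect-node>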
CNI plugins use different network models: Calico routes pod traffic unencapsulated via BGP (TCP 179) or encapsulated with VXLAN (UDP 4789); Flannel uses a VXLAN overlay (UDP 8472 by default) or direct host-gw routing; Cilium uses eBPF for high-performance networking. CNI chaining is also possible (e.g., Cilium alongside Calico for network policies).
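Whichever model is in use, the active CNI configuration on a node lives in /etc/cni/net.d, and the container runtime loads the lexicographically first file, which matters if multiple plugins have written configs there:
# Run on the node: list CNI configs (first file in sort order wins)
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conf*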
In cloud environments (AWS, GCP, Azure), the cloud controller manager (CCM) may also manage node routes. If CCM fails to create routes due to throttling or permissions, the node will be unable to send traffic to pod CIDRs on other nodes. Verify IAM/cloud permissions if running on managed Kubernetes.
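A quick way to tell cloud-route failures apart from CNI startup failures is the condition's reason field: the Kubernetes route controller sets NoRouteCreated when it cannot program routes, whereas CNI plugins set their own reasons:
# Show why the condition was last set
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="NetworkUnavailable")].reason}'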
When migrating between CNI plugins, use tools like Multus or perform a controlled live migration. Pod IPs will change during migration.