FailedScheduling means no node can accept your pod due to resource constraints, taints, affinity rules, or other scheduling requirements. Check the specific reason in pod events.
FailedScheduling is a Kubernetes scheduler event indicating that a pod cannot be placed on any available node. The scheduler evaluates all nodes against the pod's requirements and constraints, and when none qualify, it generates this event. The event message includes the specific reason: "0/N nodes are available" followed by why each node was rejected. Common reasons include insufficient CPU/memory, taint/toleration mismatches, node affinity conflicts, or volume zone restrictions.
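When several pods are stuck, it can help to start from the event stream rather than a single pod. A quick sketch using standard field selectors (supported by current kubectl versions):
# List pods stuck in Pending across all namespaces
kubectl get pods -A --field-selector=status.phase=Pending
# List recent FailedScheduling events cluster-wide
kubectl get events -A --field-selector reason=FailedScheduling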
Get the detailed error message:
kubectl describe pod <pod-name>
Look at the Events section for messages like:
- "0/3 nodes are available: 3 Insufficient cpu"
- "0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate"
- "0/3 nodes are available: 3 node(s) didn't match node selector"
- "0/3 nodes are available: 3 node(s) had volume node affinity conflict"
The message tells you exactly which constraint is blocking scheduling.
Compare pod requirements against node capacity:
# View node capacity and allocations
kubectl describe nodes | grep -A 10 "Allocated resources"
# Check actual resource usage
kubectl top nodes
# See what's consuming resources
kubectl top pods -A --sort-by=cpu
kubectl top pods -A --sort-by=memory
If nodes are overcommitted, reduce pod requests or add nodes.
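To see exactly what the pending pod is requesting, a jsonpath query like the one below (a convenience sketch; substitute the real pod name) prints each container's requests for comparison against node allocatable:
kubectl get pod <pod-name> -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources.requests}{"\n"}{end}'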
View node taints:
kubectl describe nodes | grep Taints
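For a per-node summary without grep noise, a custom-columns query works as well (the formatting of the taint objects varies slightly by kubectl version):
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'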
If nodes are tainted, add matching tolerations:
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "app"
    effect: "NoSchedule"
Or remove the taint:
kubectl taint nodes <node-name> dedicated:NoSchedule-
Check if the pod requires specific nodes:
kubectl get pod <pod-name> -o yaml | grep -A 10 nodeSelector
kubectl get pod <pod-name> -o yaml | grep -A 20 affinity
Verify nodes have the required labels:
kubectl get nodes --show-labels
Add missing labels or adjust the pod's nodeSelector.
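For example, if the pod's nodeSelector expects a label such as disktype=ssd (a hypothetical key/value used only for illustration), label a node and confirm it matches:
kubectl label nodes <node-name> disktype=ssd
kubectl get nodes -l disktype=ssd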
PersistentVolumes in specific zones require pods in the same zone:
# Check PV zone
kubectl get pv <pv-name> -o yaml | grep -A 5 nodeAffinity
# Check which zones have available nodes
kubectl get nodes -L topology.kubernetes.io/zone
Solutions:
- Add nodes to the volume's zone
- Create a new PV in a zone with available nodes
- Use a storage class with zone-spanning capability (see the sketch below)
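A common pattern for the last option is a StorageClass with WaitForFirstConsumer volume binding, which delays PV binding until the pod is scheduled so the volume is provisioned in a zone that actually has capacity. A minimal sketch; the class name is arbitrary and the provisioner shown (the AWS EBS CSI driver) should be replaced with your environment's driver:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer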
Check if nodes are schedulable:
kubectl get nodes
# Look for SchedulingDisabled status
Uncordon cordoned nodes:
kubectl uncordon <node-name>
Or scale up the cluster:
# Cloud-specific scaling commands
# GKE
gcloud container clusters resize CLUSTER --num-nodes=5
# EKS (nodegroup)
eksctl scale nodegroup --cluster=CLUSTER --name=NODEGROUP --nodes=5
FailedScheduling events are normal during cluster scaling or while the cluster autoscaler is evaluating whether to add nodes. Brief pending periods are expected.
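If the cluster autoscaler is deployed with its defaults, its status ConfigMap (named cluster-autoscaler-status in kube-system by default) shows whether a scale-up is currently being considered:
kubectl -n kube-system describe configmap cluster-autoscaler-status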
For complex scheduling requirements, use soft affinity (preferredDuringSchedulingIgnoredDuringExecution) instead of hard requirements when possible:
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a
PodDisruptionBudgets can indirectly cause scheduling failures by preventing pods from being evicted to make room. Review PDBs if scheduling seems blocked without obvious resource constraints.
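A quick way to review PDBs is to list them across all namespaces and check the ALLOWED DISRUPTIONS column:
kubectl get pdb -A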
The scheduler caches node information for performance. In rare cases, stale cache causes scheduling decisions based on outdated node states. The cache refreshes automatically, but restarting the scheduler can force a refresh.
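In kubeadm-based clusters the scheduler runs as a static pod labeled component=kube-scheduler, so deleting the pod causes the kubelet to recreate it with a fresh cache (managed control planes hide this pod, so the command below only applies where the scheduler pod is visible):
kubectl -n kube-system delete pod -l component=kube-scheduler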