The Kubernetes scheduler cannot place a pod on any node because resource constraints, node selectors, taints, or infrastructure issues prevent valid placement. Pods remain in Pending state indefinitely. Fix by scaling the cluster, relaxing scheduling constraints, or ensuring nodes have required labels and resources.
The scheduler evaluated all nodes and rejected the pod for placement due to one or more constraints: insufficient CPU/memory, missing node labels (nodeSelector/affinity), untolerated taints, volume conflicts, or nodes in NotReady state. The pod is valid but unplaceable given current cluster state.
Review why scheduler rejected pod:
kubectl describe pod <pod-name> -n <namespace>
Look in the "Events" section for FailedScheduling messages. Common reasons:
- "Insufficient cpu" / "Insufficient memory"
- "didn't match node selector"
- "had untolerated taint"
- "volume node affinity conflict"
Verify cluster has healthy nodes:
kubectl get nodes -o wide
kubectl describe nodes
Look for:
- Node status (should be Ready)
- Allocatable resources (CPU, memory)
- Conditions (MemoryPressure, DiskPressure, NotReady)
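To read a node's allocatable capacity directly, without scanning the full describe output, jsonpath works:
kubectl get node <node-name> -o jsonpath='{.status.allocatable}'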
Compare pod requests with available resources:
kubectl describe pod <pod-name> -n <namespace> | grep -A 5 "resources:"
kubectl describe node <node-name> | grep -A 10 "Allocated resources"
Pod requests must fit on at least one node. If requests exceed cluster capacity, either reduce the requests or add nodes.
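For reference, resource requests are declared per container in the pod spec; a minimal sketch (the values are illustrative, not recommendations):
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
The scheduler places pods based on requests, not limits, so oversized requests are the usual cause of "Insufficient cpu" / "Insufficient memory" events.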
If pod has nodeSelector, verify nodes have matching labels:
kubectl get nodes --show-labels
kubectl get pod <pod-name> -o yaml | grep -A 3 "nodeSelector"
Labels must match exactly (key and value). Add labels if missing:
kubectl label nodes <node-name> disktype=ssd
If nodes are tainted, the pod must have matching tolerations:
kubectl describe node <node-name> | grep Taints
kubectl get pod <pod-name> -o yaml | grep -A 5 "tolerations:"
Add a toleration to the pod spec if needed:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "database"
  effect: "NoSchedule"
Add more nodes if cluster resources are exhausted:
# GKE: enable the cluster autoscaler
gcloud container clusters update <cluster> --enable-autoscaling --min-nodes=2 --max-nodes=10 --node-pool=default-pool
# Or manually resize the node pool
gcloud container clusters resize <cluster> --node-pool=default-pool --num-nodes=5
# AKS
az aks nodepool scale --resource-group <rg> --cluster-name <cluster> --name <pool> --node-count 5
Namespace quotas may also prevent pod creation:
kubectl describe resourcequota -n <namespace>
kubectl describe limitrange -n <namespace>
If the quota is reached, either increase the quota or delete lower-priority pods.
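For context, a ResourceQuota that caps a namespace looks like this (the name and values are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: <namespace>
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "10"
Note that once a compute quota is active, every pod in the namespace must declare resource requests for those resources, or creation is rejected outright.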
Use kubectl dry-run to validate a pod manifest against the API server before applying: kubectl apply --dry-run=server -f pod.yaml. For debugging, use kubectl logs -n kube-system -l component=kube-scheduler to see scheduler decisions. Prefer soft affinity (preferredDuringSchedulingIgnoredDuringExecution) over hard affinity (requiredDuringSchedulingIgnoredDuringExecution) so pods fall back to any available node instead of becoming unschedulable. For multi-zone clusters, pod anti-affinity spreads replicas across zones, but it can itself leave pods Pending when zones are unbalanced.
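A sketch of the soft-affinity form recommended above (the disktype label is illustrative):
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
Unlike the required form, this expresses a preference only: the scheduler favors labeled nodes when they have capacity but can still place the pod elsewhere.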