Pods stuck in Pending status cannot be scheduled to any node. Check for insufficient resources, unbound PersistentVolumeClaims, node selectors, taints/tolerations, or affinity rules.
A pod in Pending status has been accepted by Kubernetes but cannot be scheduled onto a node. The scheduler is unable to find a suitable node that satisfies all the pod's requirements. Pending is often a transient state during normal operation, but pods stuck in Pending indicate a scheduling problem. The issue could be cluster-wide (no nodes have enough resources) or pod-specific (constraints like node selectors or affinity rules cannot be satisfied).
Get the specific reason the pod cannot be scheduled:
kubectl describe pod <pod-name>

Look at the Events section for messages like:
- "0/3 nodes are available: 3 Insufficient cpu"
- "0/3 nodes are available: 3 node(s) didn't match Pod's node affinity"
- "0/3 nodes are available: 3 node(s) had taint {key: value}"
- "pod has unbound immediate PersistentVolumeClaims"
If the issue is insufficient resources:
# View node capacity and allocations
kubectl describe nodes | grep -A 5 "Allocated resources"
# Check actual usage
kubectl top nodes
# View what's consuming resources
kubectl top pods --all-namespaces --sort-by=memory

Solutions:
- Reduce pod resource requests
- Add more nodes to the cluster
- Delete unused pods to free resources
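For example, lowering a pod's requests makes it easier for the scheduler to place. A minimal sketch, with illustrative names and values (not from a real workload):

```yaml
# Hypothetical pod spec showing right-sized resource requests.
# The image, names, and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"      # request only what the app actually needs
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```

The scheduler places pods based on requests, not actual usage, so inflated requests cause Pending even on lightly loaded nodes.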
If the pod needs storage:
# Check PVC status
kubectl get pvc
# If PVC is Pending, check why
kubectl describe pvc <pvc-name>

Common fixes:
- Create a matching PersistentVolume
- Install the CSI driver (AWS EBS, GCE PD, Azure Disk)
- Fix storage class name in PVC
- Ensure storage class allows dynamic provisioning
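A PVC that names a nonexistent storage class stays Pending forever, and the pod with it. A sketch of a correctly bound claim, assuming a class named "gp2" (verify the real name with kubectl get storageclass):

```yaml
# Illustrative PVC with an explicit storage class.
# "gp2" is an assumed class name - match it to your cluster's classes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi
```

If storageClassName is omitted, the cluster's default class (if any) is used for dynamic provisioning.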
Check if the pod has scheduling constraints:
kubectl get pod <pod-name> -o yaml | grep -A 10 nodeSelector
kubectl get pod <pod-name> -o yaml | grep -A 20 affinity

Verify nodes have matching labels:
kubectl get nodes --show-labels

Fix by either:
- Adding required labels to nodes
- Removing or adjusting nodeSelector/affinity in pod spec
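As an illustration, a pod with a nodeSelector only schedules onto nodes carrying that exact label. The "disktype=ssd" label below is hypothetical; a node would need kubectl label nodes <node-name> disktype=ssd for this pod to leave Pending:

```yaml
# Sketch of a pod constrained by a nodeSelector.
# "disktype: ssd" is an illustrative label, not a built-in one.
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: nginx:1.25
```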
View node taints:
kubectl describe nodes | grep Taints

If nodes are tainted, pods need matching tolerations:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "app"
  effect: "NoSchedule"

To remove a taint from a node:
kubectl taint nodes <node-name> key:NoSchedule-

Check if nodes are cordoned (unschedulable):
kubectl get nodes
# Look for SchedulingDisabled status

Uncordon to allow scheduling:
kubectl uncordon <node-name>

Also check for node conditions:
kubectl describe node <node-name> | grep -A 10 Conditions

Nodes with DiskPressure, MemoryPressure, or PIDPressure won't accept new pods.
For persistent Pending issues, filter cluster events for scheduling failures:
kubectl get events --field-selector reason=FailedScheduling

HostPort scheduling: Pods using hostPort can only run one instance per node per port. This limits scheduling significantly. Prefer Services with NodePort instead.
Resource fragmentation: A cluster may have enough total resources but no single node can fit the pod. Consider:
- Reducing resource requests
- Using pod priority and preemption
- Enabling cluster autoscaler
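Pod priority and preemption can be sketched with a PriorityClass: pods that reference it may evict lower-priority pods when no node otherwise fits. The name and value below are illustrative:

```yaml
# Hypothetical PriorityClass; name, value, and description are placeholders.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "For critical workloads that may preempt lower-priority pods"
```

A pod opts in by setting priorityClassName: high-priority in its spec.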
In namespaces with a LimitRange or ResourceQuota, pod creation may be restricted even when nodes have capacity. Check namespace quotas:
kubectl get resourcequota -n <namespace>
kubectl get limitrange -n <namespace>
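For reference, a quota caps the aggregate requests a namespace can hold. This is an illustrative manifest; the name, namespace, and limits are placeholders:

```yaml
# Hypothetical ResourceQuota. A pod whose requests would exceed these
# totals is rejected at admission rather than scheduled.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "10"
```

Note that quota violations reject the pod at creation time, so a controller (e.g. a Deployment) will report the failure in its events rather than leaving a Pending pod.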