This scheduling error occurs when all available nodes are marked as unschedulable, typically because they have been cordoned for maintenance or have NoSchedule taints.
The "node(s) were unschedulable" error indicates that the Kubernetes scheduler cannot place your pod on any available node because those nodes are marked as unschedulable. This happens when nodes have been cordoned (manually marked unschedulable) or have the node.kubernetes.io/unschedulable:NoSchedule taint. When a node is cordoned, it shows a SchedulingDisabled status. Existing pods continue running, but no new pods can be scheduled there. This is commonly used during planned maintenance, node upgrades, or when draining nodes. The error often appears alongside other scheduling failures (insufficient resources, taints not tolerated), so you may need to address multiple issues to successfully schedule your pod.
List all nodes and their scheduling status:
kubectl get nodes
Look for nodes with SchedulingDisabled. Then check taints:
kubectl describe node <node-name> | grep -A5 Taints
If nodes were cordoned for maintenance that's now finished, uncordon them:
# Uncordon a specific node
kubectl uncordon <node-name>
# Uncordon all nodes
kubectl get nodes -o name | xargs -I{} kubectl uncordon {}
Remove taints that are blocking scheduling:
# Remove a specific taint
kubectl taint node <node-name> node.kubernetes.io/unschedulable:NoSchedule-
# For control plane nodes (if you want to schedule pods there)
kubectl taint node <node-name> node-role.kubernetes.io/control-plane:NoSchedule-
Note that the trailing minus sign (-) is what removes the taint.
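To confirm the taint is gone, and to spot any other taints cluster-wide, one option is a custom-columns listing of every node's taint keys:
# Show each node with the keys of its taints (<none> means no taints)
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'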
If the taints are intentional, add tolerations to your pod spec:
spec:
  tolerations:
  - key: "node.kubernetes.io/unschedulable"
    operator: "Exists"
    effect: "NoSchedule"
This allows the pod to be scheduled on cordoned nodes.
If all nodes are legitimately unavailable, add more nodes:
# For managed Kubernetes (GKE example)
gcloud container clusters resize <cluster> --num-nodes=3
# For EKS with managed node groups
aws eks update-nodegroup-config --cluster-name <cluster> \
  --nodegroup-name <nodegroup> --scaling-config minSize=3,maxSize=5,desiredSize=3
The SchedulingDisabled status is distinct from node readiness. A cordoned node can be Ready but unschedulable: it's healthy and running pods, just not accepting new ones.
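You can see this distinction directly: a healthy cordoned node shows Ready,SchedulingDisabled under STATUS, and cordoning sets the node's spec.unschedulable field to true. For example:
# STATUS reads "Ready,SchedulingDisabled" for a cordoned but healthy node
kubectl get nodes
# Inspect the underlying field that kubectl cordon sets
kubectl get node <node-name> -o jsonpath='{.spec.unschedulable}{"\n"}'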
When using kubectl drain, nodes are automatically cordoned first. After drain completes and you've performed maintenance, remember to uncordon:
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data
# Perform maintenance
kubectl uncordon <node>
Different Kubernetes versions use different taint keys for control plane nodes:
- Pre-1.24: node-role.kubernetes.io/master:NoSchedule
- 1.24+: node-role.kubernetes.io/control-plane:NoSchedule
Check your cluster version when managing control plane taints.
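If you're unsure which key applies, one approach is to check the server version and then read the taints straight off the control plane nodes (the control-plane label below may be node-role.kubernetes.io/master on older clusters):
# Confirm the server version
kubectl version
# List taints on nodes labeled as control plane
kubectl describe nodes -l node-role.kubernetes.io/control-plane | grep -A5 Taints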