This scheduling error occurs when no available nodes satisfy the pod's anti-affinity constraints, often because all nodes already have pods that conflict with the rules.
The "didn't match pod anti-affinity rules" error means Kubernetes cannot find any node that satisfies your pod's anti-affinity requirements. Pod anti-affinity is used to spread pods across nodes—commonly to ensure replicas don't run on the same node for high availability. When using requiredDuringSchedulingIgnoredDuringExecution (hard anti-affinity), the scheduler will not place the pod if no valid node exists. The pod remains Pending indefinitely until a suitable node becomes available. This typically happens when you have more replicas than nodes, or when existing pods on all nodes match the anti-affinity label selector.
View the pod's affinity configuration:
kubectl get pod <pod-name> -o yaml | grep -A 30 affinity
Identify the labelSelector and topologyKey being used.
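If you prefer structured output over grep, a jsonpath query pulls out just the anti-affinity block (a sketch; it assumes the rule is defined directly on the pod spec):
kubectl get pod <pod-name> -o jsonpath='{.spec.affinity.podAntiAffinity}'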
Ensure you have enough nodes for your replicas:
# Count schedulable nodes
kubectl get nodes --no-headers | wc -l
# Check current replica count
kubectl get deployment <name> -o jsonpath='{.spec.replicas}'
With per-node anti-affinity, you need at least as many nodes as replicas.
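As a quick check, compare the two numbers side by side. This sketch assumes a Deployment named myapp in the current namespace and excludes cordoned nodes from the count:
# Nodes that can accept pods (cordoned nodes show SchedulingDisabled in STATUS)
nodes=$(kubectl get nodes --no-headers | grep -cv SchedulingDisabled)
replicas=$(kubectl get deployment myapp -o jsonpath='{.spec.replicas}')
echo "schedulable nodes: $nodes, desired replicas: $replicas"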
Find pods matching the anti-affinity selector:
# If anti-affinity uses app=myapp label
kubectl get pods -l app=myapp -o wide
This shows which nodes already have conflicting pods.
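To see at a glance how many conflicting pods sit on each node, you can group the -o wide output by its NODE column (a sketch assuming the default column layout, where NODE is the seventh field):
kubectl get pods -l app=myapp -o wide --no-headers | awk '{print $7}' | sort | uniq -c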
Change from required to preferred to allow scheduling when ideal placement isn't possible:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname
The scheduler will try to honor the rule but place the pod somewhere if no ideal node exists.
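After editing the manifest, apply it and confirm the Pending pods schedule (a sketch assuming the Deployment is defined in deployment.yaml):
kubectl apply -f deployment.yaml
kubectl rollout status deployment/<name>
kubectl get pods -l app=myapp -o wide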
If you need hard anti-affinity guarantees, add nodes:
# For managed Kubernetes (example: GKE)
gcloud container clusters resize <cluster> --num-nodes=5
# For self-managed, add nodes to your cluster
Ensure all nodes have the label specified in topologyKey:
kubectl get nodes --show-labels | grep kubernetes.io/hostname
Nodes missing the topology key label are considered invalid for anti-affinity placement.
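A more readable check prints the relevant labels as columns, one row per node (a sketch using kubectl's -L flag; swap in your own topologyKey if it is not one of these):
kubectl get nodes -L kubernetes.io/hostname -L topology.kubernetes.io/zone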
Inter-pod affinity and anti-affinity are computationally expensive. In clusters with hundreds of nodes, these rules can significantly slow down scheduling. Consider using Pod Topology Spread Constraints instead for large clusters:
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app: myapp
The LimitPodHardAntiAffinityTopology admission controller, when enabled, restricts the topologyKey of hard anti-affinity rules to kubernetes.io/hostname. It is not enabled by default, but on clusters that run it, using a custom topology key in a required rule means changing the cluster's admission configuration.
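Note that topologySpreadConstraints lives directly in the pod spec rather than under affinity. A minimal Deployment sketch showing where it goes (names and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: myapp
      containers:
      - name: myapp
        image: myapp:latest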
For zone-level spreading (high availability across failure domains), use:
topologyKey: topology.kubernetes.io/zone
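For example, a soft zone-level rule that keeps replicas in separate zones where possible might look like this (a sketch assuming nodes carry the standard topology.kubernetes.io/zone label):
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: myapp
        topologyKey: topology.kubernetes.io/zone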
How to fix "eks subnet not found" in Kubernetes
unable to compute replica count
How to fix "unable to compute replica count" in Kubernetes HPA
error: context not found
How to fix "error: context not found" in Kubernetes
default backend - 404
How to fix "default backend - 404" in Kubernetes Ingress
serviceaccount cannot list resource
How to fix "serviceaccount cannot list resource" in Kubernetes