Preempted pods were evicted to make room for higher-priority pods. Adjust priority classes, add cluster capacity, or review resource requests.
Pod preemption occurs when a higher-priority pod cannot be scheduled due to insufficient resources, so the scheduler evicts lower-priority pods to make room. Unlike node-pressure eviction (kubelet protecting node health), preemption is a scheduling decision to prioritize important workloads. Preemption is controlled by PriorityClass resources. Pods with higher priority values can preempt pods with lower values. The scheduler terminates victim pods gracefully and schedules the pending high-priority pod on the freed resources.
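For reference, a minimal preempting PriorityClass looks like this (the name and value are illustrative, not built-ins):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: production  # illustrative name
value: 1000000      # higher numbers win during scheduling
globalDefault: false
description: "Priority for production workloads"
Pods opt in by setting spec.priorityClassName: production.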
Check why the pod was preempted:
kubectl describe pod <preempted-pod>
Look at Events for:
- "Preempted by <namespace>/<pod-name>" - shows what preempted it
- Priority class of victim and preemptor
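If the events have already rolled off the describe output, you can query them directly (substitute the pod name; events are retained only for a limited time, one hour by default):
kubectl get events --field-selector involvedObject.name=<preempted-pod> --sort-by=.lastTimestamp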
Check priority classes in use:
kubectl get priorityclasses
kubectl get pod <pod-name> -o yaml | grep priorityClassName
Audit priority usage across the cluster:
# See all pods with their priorities
kubectl get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PRIORITY:.spec.priorityClassName
Common priority hierarchy:
# System critical (built-in)
system-cluster-critical: 2000000000
system-node-critical: 2000001000
# Production workloads
production: 1000000
# Default/batch workloads
default: 0
batch: -100
Ensure non-critical workloads don't have elevated priorities.
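One way to spot outliers is to sort pods by their resolved numeric priority (a sketch; adjust the columns to taste):
kubectl get pods -A --sort-by=.spec.priority -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PRIORITY:.spec.priority
Sorting is ascending, so the highest-priority pods appear at the bottom of the output.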
If legitimate high-priority pods are preempting needed workloads:
# Check node utilization
kubectl top nodes
# Check pending pods
kubectl get pods --field-selector=status.phase=Pending
Solutions:
- Add more nodes to the cluster
- Use larger node instance types
- Enable cluster autoscaler for dynamic scaling
# Example: Scale node pool (cloud-specific)
# GKE
gcloud container clusters resize CLUSTER --num-nodes=5
# EKS (via node group)
aws eks update-nodegroup-config --cluster-name CLUSTER --nodegroup-name NODEGROUP --scaling-config minSize=3,maxSize=10,desiredSize=5
Create priority classes that queue without preempting:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-non-preempting
value: 1000000
preemptionPolicy: Never  # won't evict other pods
globalDefault: false
description: "High priority but won't preempt others"
Pods with this class wait in the scheduling queue rather than evicting existing pods.
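To adopt it, reference the class by name in the pod spec (a minimal sketch; the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: queued-worker  # placeholder name
spec:
  priorityClassName: high-priority-non-preempting
  containers:
  - name: worker
    image: busybox:1.36  # placeholder image
    command: ["sleep", "3600"]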
PDBs limit voluntary disruptions but don't prevent preemption directly. They're more useful for planned operations:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2  # or maxUnavailable: 1
  selector:
    matchLabels:
      app: critical-app
For preemption protection, use appropriate priority classes instead.
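To review the budgets already in effect and how much disruption headroom they allow:
kubectl get poddisruptionbudgets -A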
Right-size resource requests to reduce resource contention:
# Check actual vs requested resources
kubectl top pods
kubectl get pods -o custom-columns=NAME:.metadata.name,REQ_CPU:.spec.containers[*].resources.requests.cpu,REQ_MEM:.spec.containers[*].resources.requests.memory
Reduce over-provisioned requests:
resources:
  requests:
    cpu: "100m"  # actual need, not worst-case
    memory: "256Mi"
  limits:
    cpu: "500m"  # allow bursting
    memory: "512Mi"
Lower requests mean more pods fit, reducing preemption pressure.
PDB enforcement during preemption is best-effort: the scheduler prefers victims whose eviction won't violate a PodDisruptionBudget, but it will violate one if the pending pod cannot be scheduled any other way. It tries to minimize preemption impact by selecting victims that:
1. Have lower priority
2. Won't violate PDBs
3. Require evicting fewer pods
RBAC alone cannot gate which priorityClassName a pod may use; it only controls who can create or modify PriorityClass objects themselves. To restrict consumption of a high-priority class, use a ResourceQuota with a PriorityClass scopeSelector (the namespace and class names here are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: restrict-high-priority
  namespace: dev  # namespace to restrict
spec:
  hard:
    pods: "0"  # no pods using the matched class may run here
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high-priority"]
Monitor preemption with:
kubectl get events --field-selector reason=Preempted
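If you scrape the kube-scheduler with Prometheus, it also exports preemption counters; the metric names below are current upstream names, so verify them against your scheduler version:
# Preemption attempts per second
rate(scheduler_preemption_attempts_total[5m])
# Distribution of victims per preemption event
histogram_quantile(0.9, rate(scheduler_preemption_victims_bucket[5m]))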