Evicted pods were terminated by kubelet due to node resource pressure (memory, disk, or ephemeral storage). Set proper resource limits, clean up disk space, or scale your cluster.
Pod eviction occurs when the kubelet proactively terminates pods to reclaim resources on a node experiencing pressure. Unlike application crashes, eviction is an intentional Kubernetes mechanism to maintain node stability. When a node runs low on memory, disk space, or ephemeral storage, kubelet evicts pods based on their Quality of Service (QoS) class and resource usage. BestEffort pods (no resource limits) are evicted first, followed by Burstable pods exceeding their requests, with Guaranteed pods evicted last.
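You can check which QoS class the kubelet assigned to a pod directly from its status (replace <pod-name> with your pod):
# Print the pod's QoS class: Guaranteed, Burstable, or BestEffort
kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'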
Get the eviction reason:
kubectl describe pod <pod-name>
Look for messages like:
- "The node was low on resource: memory"
- "The node was low on resource: ephemeral-storage"
- "Pod ephemeral local storage usage exceeds the total limit"
Check node conditions:
kubectl describe node <node-name> | grep -A 10 Conditions
Evicted pods remain as objects taking up etcd space. Remove them:
# Delete all evicted pods in current namespace
kubectl get pods --field-selector=status.phase=Failed | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
# Delete across all namespaces
kubectl get pods -A --field-selector=status.phase=Failed | grep Evicted | awk '{print $2 " -n " $1}' | xargs -L1 kubectl delete pod
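If you do not need to keep any Failed pods around, recent kubectl versions let you delete by field selector alone; note that this removes all Failed pods, not just the evicted ones:
# Delete every Failed pod in all namespaces (includes non-evicted failures)
kubectl delete pods -A --field-selector=status.phase=Failed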
Improve QoS class to reduce eviction priority:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
QoS Classes:
- Guaranteed (requests = limits): Evicted last
- Burstable (requests set, but not equal to limits): Medium priority
- BestEffort (no requests/limits): Evicted first
Set at least requests to avoid BestEffort status.
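For reference, a minimal pod spec with requests equal to limits, which the kubelet classifies as Guaranteed (the name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example   # placeholder name
spec:
  containers:
  - name: app
    image: myapp:latest      # placeholder image
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "512Mi"      # requests == limits => Guaranteed QoS
        cpu: "500m"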
For memory-related evictions:
# Check node memory
kubectl top nodes
# Check pod memory usage
kubectl top pods --sort-by=memory
Solutions:
- Increase node memory (larger instance type)
- Add more nodes to distribute load
- Reduce memory limits on less critical pods
- Fix memory leaks in applications
- Use HorizontalPodAutoscaler to scale out instead of up
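For the scale-out option, a minimal memory-based HorizontalPodAutoscaler might look like this; the Deployment name my-app, the replica bounds, and the 70% target are placeholders, and metrics-server must be installed:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa           # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app             # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average memory use exceeds 70% of requests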
For disk-related evictions:
# On the node, check disk usage
df -h
du -sh /var/lib/docker/*
du -sh /var/lib/containerd/*
du -sh /var/log/*
Clean up:
# Remove unused images
crictl rmi --prune
# Clean container logs (careful - loses logs)
truncate -s 0 /var/lib/docker/containers/*/*-json.log
Set ephemeral-storage limits on pods to prevent unbounded growth.
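Per-container ephemeral-storage requests and limits sit alongside CPU and memory; the values below are illustrative:
resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    ephemeral-storage: "2Gi"   # the pod is evicted if its local storage use exceeds this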
Protect important pods from eviction:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "High priority for critical services"
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: myapp
Higher priority pods are evicted after lower priority ones.
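After the pod is admitted, you can confirm the resolved priority value on its spec:
# Admission copies the PriorityClass value into .spec.priority
kubectl get pod critical-pod -o jsonpath='{.spec.priority}'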
Eviction thresholds are configurable via kubelet flags:
- --eviction-hard: Immediate eviction (default: memory.available<100Mi, nodefs.available<10%)
- --eviction-soft: Eviction after grace period
- --eviction-soft-grace-period: How long to wait before soft eviction
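On clusters where the kubelet reads a configuration file, the equivalent settings can be expressed in a KubeletConfiguration; the values here are illustrative, not recommendations:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
evictionSoft:
  memory.available: "300Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"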
For I/O-intensive workloads, kernel page cache can trigger false memory pressure. The cache is reclaimable but kubelet may not account for it correctly. Setting memory requests = limits helps prevent this.
Monitor eviction patterns with:
kubectl get events --field-selector reason=Evicted
Consider using the cluster autoscaler to add nodes automatically when resource pressure increases.
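To catch node pressure before evictions start, you can also poll the node conditions; a jsonpath sketch:
# Show MemoryPressure and DiskPressure status for every node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="MemoryPressure")].status}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'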