This error occurs when a node has reached its maximum pod capacity, preventing new pods from being scheduled. The limit is determined by kubelet configuration and available IP addresses.
The "Too many pods" error indicates that a Kubernetes node has reached its maximum pod capacity. Every node has a hard limit on how many pods it can run, determined by two factors: the kubelet's maxPods setting (default 110) and the number of available IP addresses. Kubernetes officially recommends no more than 110 pods per node to maintain cluster stability, but cloud providers impose their own limits based on instance type and networking configuration.

In cloud environments, IP address exhaustion is the most common cause. Each pod typically requires its own IP from the cluster's network range, and smaller instance types support fewer network interfaces and therefore fewer IPs.
View the node's pod limit and current allocation:

```bash
# Check allocatable pods
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'

# Count current pods on the node (subtract 1 for the header line)
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> | wc -l

# Detailed view
kubectl describe node <node-name> | grep -A5 "Allocated resources"
```

The simplest solution is horizontal scaling: add more nodes.
```bash
# GKE
gcloud container clusters resize <cluster> --num-nodes=5

# EKS
aws eks update-nodegroup-config --cluster-name <cluster> \
  --nodegroup-name <ng> --scaling-config desiredSize=5

# AKS
az aks scale --resource-group <rg> --name <cluster> --node-count 5
```

Larger instances support more pods because they provide more network interfaces (ENIs) and IP addresses:
| AWS Instance | Max Pods (VPC CNI) |
|--------------|-------------------|
| t3.micro | 4 |
| t3.medium | 17 |
| m5.large | 29 |
| m5.xlarge | 58 |
Replace node groups with larger instance types for better pod density.
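The table can be read mechanically when sizing a node group. A minimal sketch (the target pod count is a hypothetical example; the per-type limits are copied from the table above) that finds the smallest listed type able to hold a required number of pods:

```shell
# Pick the smallest instance type (from the table above) whose
# VPC CNI pod limit meets a required pod count per node.
target=40                     # hypothetical pods-per-node requirement
pick=""
for entry in t3.micro:4 t3.medium:17 m5.large:29 m5.xlarge:58; do
  type=${entry%%:*}           # instance type, e.g. "m5.large"
  max=${entry##*:}            # its VPC CNI pod limit
  if [ -z "$pick" ] && [ "$max" -ge "$target" ]; then
    pick=$type                # list is ascending, so first match is smallest
  fi
done
echo "smallest listed type for $target pods: $pick"
```

Because the list is ordered from smallest to largest, the first type whose limit meets the target is the cheapest fit among those listed.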
If node resources allow, increase the kubelet maxPods setting:

```bash
# Edit the kubelet config (location varies by setup)
sudo vim /var/lib/kubelet/config.yaml
```

Add or modify:

```yaml
maxPods: 150
```

Then restart the kubelet:

```bash
sudo systemctl restart kubelet
```

Note: this doesn't help if IP exhaustion is the actual limit.
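For context, the kubelet config file is a typed API object, so the `maxPods` key sits alongside a `kind` and `apiVersion` header. A minimal fragment (other fields omitted; your file will contain many more):

```yaml
# Minimal KubeletConfiguration fragment (sketch); real files carry
# additional fields that should be left in place.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 150
```

On managed node groups, prefer the provider's mechanism (e.g. launch template user data or node pool flags) over editing this file by hand, since node replacement will discard manual changes.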
For EKS, enable prefix delegation to dramatically increase IP capacity:

```bash
kubectl set env daemonset aws-node -n kube-system \
  ENABLE_PREFIX_DELEGATION=true
```

This assigns /28 IP prefixes (16 addresses each) instead of individual IPs, increasing capacity from ~29 to 110+ pods on m5.large instances.
The effective pod limit is the minimum of:
1. kubelet --max-pods setting
2. Available IP addresses (instance ENI * IPs per ENI)
3. Node resources (CPU, memory) divided by pod requests
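The min-of-three rule above can be sketched directly. All numbers here are illustrative assumptions (the kubelet default, an m5.large's ENI layout, and an assumed 256 MiB request per pod), not values read from a real node:

```shell
# Sketch: the effective pod limit is the minimum of three independent caps.
kubelet_max=110                       # kubelet maxPods (default)
ip_cap=$(( 3 * (10 - 1) + 2 ))        # IP capacity: 3 ENIs x 10 IPs (m5.large)
resource_cap=$(( 8192 / 256 ))        # 8 GiB allocatable / 256 MiB per pod
limit=$kubelet_max
[ "$ip_cap" -lt "$limit" ] && limit=$ip_cap
[ "$resource_cap" -lt "$limit" ] && limit=$resource_cap
echo "effective pod limit: $limit"
```

With these assumed numbers the IP cap (29) is the binding constraint, which matches the common case on smaller cloud instances.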
For AWS EKS with the VPC CNI, calculate max pods:

```
Max Pods = (ENIs * (IPs per ENI - 1)) + 2
```

With prefix delegation enabled:

```
Max Pods = (ENIs * ((IPs per ENI - 1) * 16)) + 2
```

GKE allows up to 256 pods per node with the --max-pods-per-node flag during cluster creation. Azure AKS supports up to 250 pods per node with Azure CNI.
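Plugging an m5.large's layout (3 ENIs, 10 IPs per ENI) into both formulas as a sketch. Note the prefix-delegation formula yields a theoretical 434; in practice the configured max-pods is capped well below that (AWS recommends 110 for smaller instance sizes, which is the "110+" figure mentioned earlier), so treat the raw result as an upper bound on IP capacity, not a setting:

```shell
# VPC CNI max-pods formulas applied to an m5.large (3 ENIs, 10 IPs per ENI).
enis=3
ips_per_eni=10
standard=$(( enis * (ips_per_eni - 1) + 2 ))        # without prefix delegation
prefix=$(( enis * (ips_per_eni - 1) * 16 + 2 ))     # with /28 prefix delegation
echo "standard: $standard pods, prefix delegation: $prefix pods"
```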
Best practice: Scale horizontally (add nodes) rather than maximizing pod density per node. High pod counts stress the container runtime and can cause performance issues.