This admission error occurs when ResourceQuota or LimitRange requires containers to specify resource limits. Fix it by adding explicit limits.cpu and limits.memory to your container spec.
The "must specify limits.cpu, limits.memory" error occurs when a namespace has ResourceQuota or LimitRange policies that require all containers to declare resource limits, but your Pod spec is missing these fields. When ResourceQuota tracks CPU/memory limits, Kubernetes requires all pods to specify explicit limits so the quota controller can accurately track namespace usage. Similarly, LimitRange can enforce that containers must have limits defined. This is an admission-time validation—the API server rejects the Pod before it's created. Adding the required limits fields to your container spec resolves the issue.
View ResourceQuota and LimitRange:
# Check ResourceQuota
kubectl get resourcequota -n <namespace>
kubectl describe resourcequota -n <namespace>
# Check LimitRange
kubectl get limitrange -n <namespace>
kubectl describe limitrange -n <namespace>
If the ResourceQuota includes limits.cpu or limits.memory, pods must specify these limits.
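As a rough illustration (quota name and numbers are made up), the describe output for a quota that tracks limits looks something like this:
Name:            compute-quota
Namespace:       my-namespace
Resource         Used   Hard
--------         ----   ----
limits.cpu       500m   4
limits.memory    512Mi  8Gi
requests.cpu     250m   2
requests.memory  256Mi  4Gi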
Add both requests and limits to your container:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"       # Required
        memory: "512Mi"   # Required
Apply:
kubectl apply -f pod.yaml -n <namespace>
Add resources to the Deployment template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app        # selector and labels added so the manifest applies cleanly
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-image
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
Redeploy:
kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app
Each container needs its own limits:
spec:
  containers:
  - name: app
    image: myapp:v1
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
  - name: sidecar
    image: sidecar:v1
    resources:
      requests:
        cpu: "50m"
        memory: "64Mi"
      limits:
        cpu: "100m"       # Sidecar also needs limits
        memory: "128Mi"
An administrator can create defaults so pods don't need explicit limits:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: <namespace>
spec:
  limits:
  - type: Container
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
After applying, pods without limits receive these defaults automatically.
For Helm deployments, set resources in values.yaml:
# values.yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"
The chart template uses these values:
# templates/deployment.yaml
resources:
  {{- toYaml .Values.resources | nindent 6 }}
Install or upgrade the release:
helm upgrade --install myapp ./chart -n <namespace>
QoS Classes Based on Limits:
- Guaranteed: every container has requests equal to limits for both CPU and memory
- Burstable: at least one container sets a request or limit, but the Guaranteed criteria are not met
- BestEffort: no requests or limits at all (rejected when limits are required); you can check a Pod's assigned class as shown below
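To see which class was assigned (the pod name is a placeholder):
kubectl get pod my-pod -n <namespace> -o jsonpath='{.status.qosClass}'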
CPU vs Memory Units:
- CPU: m (millicores), e.g., 500m = 0.5 cores
- Memory: Mi (mebibytes), Gi (gibibytes)
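For example, these values are equivalent ways of writing the same amounts:
cpu: "1"        # same as 1000m (one full core)
cpu: "0.5"      # same as 500m
memory: "1Gi"   # same as 1024Mi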
Enforcement Mechanisms:
- CPU limits: Throttled via cgroups (not killed)
- Memory limits: OOMKilled if exceeded (exit code 137)
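If a container was OOM-killed and restarted, the previous termination reason is recorded in the Pod status (the pod name is a placeholder); this prints OOMKilled when the memory limit was exceeded:
kubectl get pod my-pod -n <namespace> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'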
Best Practices:
1. Set requests based on typical usage
2. Set limits based on maximum acceptable usage
3. For Guaranteed QoS: requests = limits (see the example after this list)
4. For Burstable: limits 2-4x requests (allows bursting)
5. Always set memory limits to prevent OOM affecting other pods
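A minimal sketch of a Guaranteed-QoS resources block (values are illustrative):
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "500m"       # equal to the request
    memory: "512Mi"   # equal to the request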
Limit-to-Request Ratio:
LimitRange can also enforce a maximum limit-to-request ratio (the field sits on the Container entry under spec.limits):
maxLimitRequestRatio:
  cpu: "2"       # limit can be at most 2x the request
  memory: "2"
How to fix "eks subnet not found" in Kubernetes
unable to compute replica count
How to fix "unable to compute replica count" in Kubernetes HPA
error: context not found
How to fix "error: context not found" in Kubernetes
default backend - 404
How to fix "default backend - 404" in Kubernetes Ingress
serviceaccount cannot list resource
How to fix "serviceaccount cannot list resource" in Kubernetes