LimitRange validation failures occur when pod resource specifications violate namespace policies. Pods cannot be created until their resources comply with minimum/maximum constraints.
LimitRange validation occurs during pod admission:
1. The API server checks the pod's resources against the LimitRange
2. If resources violate min/max rules, validation fails
3. The pod is rejected with a 403 Forbidden error
4. The pod cannot be created or updated

Common violations:
- Container CPU request < minimum (min cpu not met)
- Container memory request > maximum (exceeds max memory)
- Pod total CPU/memory exceeds the aggregate (Pod-type) limits
- Resource specification missing, but the LimitRange applies defaults that conflict with other constraints
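To see the failure end to end, here is a minimal reproduction sketch (the names `cpu-limit` and `myapp` are illustrative): a LimitRange with a Container-scope minimum, plus a pod that requests less than it.

```yaml
# A Container-scope minimum of 500m CPU
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit
spec:
  limits:
  - type: Container
    min:
      cpu: 500m
---
# This pod requests only 100m, so admission rejects it with 403 Forbidden
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: app
    image: myapp
    resources:
      requests:
        cpu: 100m
```

Applying both manifests to the same namespace should reproduce the Forbidden error shown below.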
Run kubectl apply to see the detailed error:
kubectl apply -f pod.yaml
# Output:
# Error from server (Forbidden): error when creating "pod.yaml":
# pods "myapp" is forbidden: container spec cpu limit "100m" is less than
# the minimum "500m" allowed by the limit range "cpu-limit"

The error specifies:
- The constraint violated (cpu limit, memory request, etc.)
- The violation type (below minimum or exceeds maximum)
- The LimitRange name that caused it
Determine if it's:
- A Container-level limit (affects individual containers)
- A Pod-level limit (affects total across all containers)
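Both scopes can live in the same LimitRange object; a minimal illustrative sketch of the distinction:

```yaml
spec:
  limits:
  - type: Container   # checked against each container individually
    max:
      cpu: 2
  - type: Pod         # checked against the sum across all containers
    max:
      cpu: 4
```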
Get the LimitRange definition:
kubectl get limitrange <name> -n <namespace> -o yaml

Example:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit
spec:
  limits:
  - type: Container
    min:
      cpu: 500m        # Minimum CPU per container
    max:
      cpu: 2           # Maximum CPU per container
    default:           # Default if not specified
      cpu: 1
    defaultRequest:
      cpu: 100m

Understand what each field means:
- min: Minimum a container may request (applies to both requests and limits)
- max: Maximum a container may use (applies to both limits and requests)
- default: Applied as limit if not specified
- defaultRequest: Applied as request if not specified
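The defaulting behavior can be sketched in shell. This is illustrative only: the real defaulting happens in the API server at admission time, and the default values below are taken from the example LimitRange above.

```shell
DEFAULT_CPU_LIMIT="1"        # from default.cpu in the LimitRange
DEFAULT_CPU_REQUEST="100m"   # from defaultRequest.cpu in the LimitRange

# Echo the explicit value if the pod spec sets one, else the default.
effective_cpu_limit()   { [ -n "$1" ] && echo "$1" || echo "$DEFAULT_CPU_LIMIT"; }
effective_cpu_request() { [ -n "$1" ] && echo "$1" || echo "$DEFAULT_CPU_REQUEST"; }

effective_cpu_limit ""        # unset in the pod -> prints "1"
effective_cpu_request "250m"  # explicit value wins -> prints "250m"
```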
Extract your pod resources:
kubectl get pod <pod> -o json | jq '.spec.containers[].resources'

Example pod spec:
spec:
  containers:
  - name: app
    resources:
      requests:
        cpu: 100m      # Might violate minimum
      limits:
        cpu: 200m      # Might exceed maximum

Compare to the LimitRange:
- Requests 100m CPU but minimum is 500m → VIOLATION
- If limit is 200m but max is 2 → OK
- If limit is 3 but max is 2 → VIOLATION
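The comparisons above can be sketched as a small shell helper that normalizes CPU quantities to millicores. This is a simplified sketch: it handles only the "m" suffix and whole cores, not the full Kubernetes quantity syntax.

```shell
# to_millicores: normalize a Kubernetes CPU quantity to millicores.
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;           # "500m" -> 500
    *)  echo $(( $1 * 1000 )) ;;   # "2"    -> 2000
  esac
}

# check_cpu VALUE MIN MAX - compare one value against LimitRange bounds
check_cpu() {
  local v min max
  v=$(to_millicores "$1"); min=$(to_millicores "$2"); max=$(to_millicores "$3")
  if   [ "$v" -lt "$min" ]; then echo "VIOLATION: ${v}m below min ${min}m"
  elif [ "$v" -gt "$max" ]; then echo "VIOLATION: ${v}m above max ${max}m"
  else echo "OK"
  fi
}

check_cpu 100m 500m 2   # request below the minimum -> VIOLATION
check_cpu 1    500m 2   # within bounds -> OK
```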
For multiple containers, add up the CPU/memory of all of them:

containers:
- name: app
  resources:
    requests:
      cpu: 500m
      memory: 256Mi
- name: sidecar
  resources:
    requests:
      cpu: 300m
      memory: 128Mi
# Total: 800m CPU, 384Mi memory
# A Pod-type max might be 1000m CPU, 512Mi memory

Edit the pod/deployment to fit within the LimitRange:
kubectl edit deployment <name> -n <namespace>

Adjust resources:
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            cpu: 500m        # Meets minimum
            memory: 256Mi
          limits:
            cpu: 1           # Below maximum of 2
            memory: 512Mi    # Below maximum of 1Gi

Or use the LimitRange defaults by omitting resources:
spec:
  template:
    spec:
      containers:
      - name: app
        image: myapp
        # No resources specified - the LimitRange will apply defaults

Apply and verify:
kubectl apply -f deployment.yaml
kubectl get pod <pod> -o yaml | grep -A10 resources:

For pods with multiple containers, ensure the aggregate stays within pod limits:
kubectl get limitrange <name> -n <namespace> -o yaml | grep -A15 "type: Pod"

Example pod-level limit:
limits:
- type: Pod
  max:
    cpu: 2000m
    memory: 2Gi

Pod with multiple containers:
spec:
  containers:
  - name: app
    resources:
      limits:
        cpu: 1000m       # app container
        memory: 1Gi
  - name: sidecar
    resources:
      limits:
        cpu: 500m        # sidecar container
        memory: 512Mi
# Total: 1500m CPU, 1536Mi memory
# Must be <= pod limit of 2000m CPU, 2Gi memory

If the aggregate exceeds the pod limit, reduce per-container limits:
- name: app
  resources:
    limits:
      cpu: 800m          # Reduced from 1000m
      memory: 768Mi      # Reduced from 1Gi
- name: sidecar
  resources:
    limits:
      cpu: 400m          # Reduced from 500m
      memory: 256Mi      # Reduced from 512Mi

If your application requirements genuinely exceed the LimitRange, update the limits instead:
kubectl get limitrange -n <namespace>
kubectl describe limitrange <name> -n <namespace>

Edit to increase the limits:
kubectl edit limitrange <name> -n <namespace>

Example update:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit
spec:
  limits:
  - type: Container
    min:
      cpu: 100m        # Lowered from 500m
    max:
      cpu: 4           # Increased from 2
  - type: Pod
    max:
      cpu: 8           # Increased from 4
      memory: 4Gi      # Increased from 2Gi

Apply and retry pod creation:
kubectl apply -f pod.yaml

Only do this if you've verified your application truly needs these resources.
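Before raising a LimitRange, it helps to total what the containers actually request. The arithmetic can be sketched in shell (`total_millicores` is a hypothetical helper, and the quantity parsing is deliberately simplified to the "m" suffix and whole cores):

```shell
# Normalize a CPU quantity to millicores (simplified sketch).
to_millicores() {
  case "$1" in *m) echo "${1%m}" ;; *) echo $(( $1 * 1000 )) ;; esac
}

# Sum any number of CPU quantities across containers.
total_millicores() {
  local sum=0 q
  for q in "$@"; do
    sum=$(( sum + $(to_millicores "$q") ))
  done
  echo "$sum"
}

# app requests 500m and the sidecar 300m, as in the earlier example:
echo "total: $(total_millicores 500m 300m)m"   # prints "total: 800m"
```

Compare the total against the Pod-type max before deciding the LimitRange is actually too small.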
Test pod creation without actually creating it:
kubectl apply -f pod.yaml --dry-run=client
# Client-side dry run validates the manifest locally only.
# It does NOT run admission plugins, so it cannot catch LimitRange violations.

For validation that does include LimitRange, use a server-side dry run:
kubectl apply -f pod.yaml --dry-run=server
# Performs server-side admission, including LimitRange validation

If a server-side dry run fails with a LimitRange error, the actual apply will fail the same way.
Debug by extracting the defaults that will be applied:
kubectl apply -f pod.yaml --dry-run=server -o yaml | grep -A10 resources:
# Shows what defaults the LimitRange applied

Once the dry run succeeds, apply for real:
kubectl apply -f pod.yaml
kubectl get pods <pod>

Prevent recurrence by documenting the constraints:
kubectl get limitrange -n <namespace> -o yaml > limitrange-policy.yaml

Create a template for developers:
# template-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: app
    image: myapp
    resources:
      requests:
        cpu: 200m          # At least 100m per LimitRange min
        memory: 256Mi      # At least 128Mi per LimitRange min
      limits:
        cpu: 1             # At most 2 per LimitRange max
        memory: 512Mi      # At most 1Gi per LimitRange max

Document in a wiki or README:
## Resource Requirements
All pods must comply with namespace LimitRange:
- Min CPU per container: 100m
- Max CPU per container: 2000m
- Min memory per container: 128Mi
- Max memory per container: 1Gi
See `kubectl get limitrange` for current policy.

Outright LimitRange validation failures are less common than plain limit-exceeded errors because Kubernetes auto-applies defaults when resources are left unspecified; most rejections come from explicit pod specs that conflict with the LimitRange. For production, consider admission webhooks (Kyverno, OPA) alongside or instead of LimitRange for more flexible policy enforcement. Multi-tenant clusters should have a separate LimitRange per tenant namespace with appropriate min/max values, and combining LimitRange with ResourceQuota gives comprehensive governance. Developers should understand the LimitRange constraints in their environment; include them in onboarding. When debugging, always compare the pod spec resources to the LimitRange limits side by side. Monitoring tools can warn when pods approach LimitRange max values, which helps with capacity planning.