LimitRange policies enforce minimum and maximum resource requests/limits per pod or container. A pod that exceeds these boundaries violates the namespace policy and is rejected at creation time.
LimitRange is a namespace-level policy that defines:
1. Min/max CPU per container
2. Min/max memory per container
3. Min/max CPU per pod
4. Min/max memory per pod
5. Default request/limit values
When a pod's resources exceed LimitRange boundaries, the API server rejects the pod with a validation error. This enforces resource discipline and prevents runaway resource consumption.
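As a sketch (the name and values here are illustrative), a single LimitRange manifest can express all of these constraints at once:
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limits
spec:
  limits:
  - type: Container
    min:
      cpu: 100m
      memory: 128Mi
    max:
      cpu: 2
      memory: 1Gi
    defaultRequest:   # injected when a container omits requests
      cpu: 100m
      memory: 128Mi
    default:          # injected when a container omits limits
      cpu: 500m
      memory: 512Mi
  - type: Pod
    max:
      cpu: 4
      memory: 2Gi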
Check what LimitRange policies exist in the namespace:
kubectl get limitrange -n <namespace>
kubectl describe limitrange -n <namespace>
Example output:
Name:       cpu-memory-limit
Namespace:  default
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       100m   2    -                -              -
Container   memory    128Mi  1Gi  -                -              -
Pod         cpu       200m   4    -                -              -
Pod         memory    256Mi  2Gi  -                -              -
This limits each container to 100m-2 CPU and 128Mi-1Gi memory, and each pod to 200m-4 CPU and 256Mi-2Gi memory in total.
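To test a manifest against this policy without creating anything, a server-side dry-run (assuming a reasonably recent kubectl and API server) runs admission, including the LimitRanger plugin, and reports the same rejection:
kubectl apply -f pod.yaml --dry-run=server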
View the pod/deployment that's being rejected:
kubectl apply -f pod.yaml # This will show the error
# Error from server (Forbidden): pods "myapp" is forbidden: maximum cpu usage per Container is 2, but limit is 4
Examine the pod spec:
kubectl get pod <pod-name> -o yaml | grep -A10 resources:   # or inspect your manifest if the pod was never created
Example pod spec:
spec:
  containers:
  - name: app
    image: myapp
    resources:
      requests:
        cpu: 2          # Requests 2 CPU (equal to the container max)
        memory: 1Gi     # Requests 1Gi memory
      limits:
        cpu: 4          # Limit of 4 CPU (exceeds the container max of 2)
        memory: 2Gi     # Limit of 2Gi memory (exceeds the container max of 1Gi)
Compare against LimitRange:
- Container CPU max: 2 → the limit of 4 EXCEEDS it
- Container memory max: 1Gi → the limit of 2Gi EXCEEDS it
- Pod CPU max: 4 → the pod total of 4 is OK
- Pod memory max: 2Gi → the pod total of 2Gi is OK
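For pods with more than one container, the Pod-level maxima apply to the sum across all containers. As a sketch (substitute your Deployment name), you can print each container's limits from the pod template to add them up:
kubectl get deployment <name> -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources.limits}{"\n"}{end}'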
Edit the pod/deployment to fit within limits:
kubectl edit deployment <name> -n <namespace>
Reduce resource requests/limits:
spec:
  containers:
  - name: app
    resources:
      requests:
        cpu: 500m       # Within max of 2
        memory: 256Mi   # Within max of 1Gi
      limits:
        cpu: 1          # Within max of 2
        memory: 512Mi   # Within max of 1Gi
Or let the system apply defaults:
spec:
  containers:
  - name: app
    # Omit requests/limits to use LimitRange defaults
LimitRange will auto-fill defaults if not specified.
Apply and verify:
kubectl apply -f deployment.yaml
kubectl get pods <pod> -o yaml | grep -A5 resources:
If your application needs more than the LimitRange allows, increase the limits:
kubectl edit limitrange <name> -n <namespace>
Update the limits:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-limit
spec:
  limits:
  - type: Container
    min:
      cpu: 100m
      memory: 128Mi
    max:
      cpu: 4          # Increased from 2
      memory: 2Gi     # Increased from 1Gi
  - type: Pod
    min:
      cpu: 200m
      memory: 256Mi
    max:
      cpu: 8          # Increased from 4
      memory: 4Gi     # Increased from 2Gi
Apply:
kubectl apply -f limitrange.yaml
Then retry pod creation:
kubectl apply -f pod.yaml
Verify the LimitRange is in your pod's namespace:
kubectl get limitrange -n <pod-namespace>
If no LimitRange exists in the pod's namespace, these limits do not apply there.
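If you are not sure where policies exist, list LimitRanges across all namespaces:
kubectl get limitrange --all-namespaces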
If you're using a different namespace:
kubectl get pods -n <namespace>
kubectl get limitrange -n <namespace>
LimitRange is namespace-scoped, so make sure you create or update the one in the correct namespace.
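To double-check which namespace your current kubectl context targets (empty output means the default namespace), you can use:
kubectl config view --minify -o jsonpath='{..namespace}'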
For temporary testing, create a namespace without LimitRange:
kubectl create namespace test
kubectl apply -f pod.yaml -n test # Should succeed
If the LimitRange is too restrictive for your use case:
kubectl delete limitrange <name> -n <namespace>
Warning: Without LimitRange, pods can request unlimited resources, potentially causing cluster overload.
Better alternative: Adjust LimitRange to be more flexible:
apiVersion: v1
kind: LimitRange
metadata:
  name: permissive
spec:
  limits:
  - type: Container
    min:
      cpu: 10m        # Very permissive minimum
      memory: 32Mi    # Low memory minimum
    max:
      cpu: 64         # High maximum
      memory: 128Gi   # Very high maximum
  - type: Pod
    max:
      cpu: 256        # Per-pod maximum within the namespace
      memory: 256Gi
Or use a ResourceQuota instead, which caps aggregate usage across the whole namespace rather than individual containers:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "100"
    requests.memory: "200Gi"
    limits.cpu: "200"
    limits.memory: "400Gi"
LimitRange can auto-apply defaults if pods don't specify resources:
apiVersion: v1
kind: LimitRange
metadata:
  name: defaults
spec:
  limits:
  - type: Container
    default:            # Applied if limits are not specified
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # Applied if requests are not specified
      cpu: 100m
      memory: 128Mi
    max:
      cpu: 2
      memory: 1Gi
Now pods without resource specs will automatically get defaults:
kubectl apply -f pod.yaml # No resources specified
kubectl get pod <pod> -o yaml | grep -A5 resources:
# Shows: cpu: 500m, memory: 512Mi (from default)
This prevents pods from running with no resource constraints at all.
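When the LimitRanger admission plugin injects defaults, it also records what it set in a pod annotation; as a sketch, you can read it back with:
kubectl get pod <pod> -o jsonpath="{.metadata.annotations['kubernetes\.io/limit-ranger']}"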
After fixing the pod spec, verify it's accepted:
kubectl apply -f pod.yaml --dry-run=client -o yaml | grep -A5 resources:   # client-side dry-run renders the manifest but does not run LimitRange admission
kubectl apply -f pod.yaml
kubectl get pods <pod> -o yaml | grep -A5 resources:
The pod should now be in Running state:
kubectl get pods
# NAME    READY   STATUS    RESTARTS   AGE
# myapp   1/1     Running   0          2m
Check if defaults were applied:
kubectl get pod <pod> -o jsonpath='{.spec.containers[0].resources}'
Should show:
{"requests":{"cpu":"100m","memory":"128Mi"},"limits":{"cpu":"500m","memory":"512Mi"}}LimitRange is useful for preventing runaway resource consumption, but it should be balanced with application needs. In multi-tenant clusters, use separate LimitRanges per tenant namespace. Combine LimitRange with ResourceQuota for comprehensive resource governance. For autoscaling (HPA, VPA), ensure LimitRange doesn't prevent desired scaling. Monitor Pod QoS classes: Guaranteed (limits=requests), Burstable (requests<limits), BestEffort (no limits) are handled differently during resource pressure. Kyverno can enforce resource policies more flexibly than LimitRange. For development, create permissive LimitRanges; for production, be more restrictive. WSL2 may have cluster-wide resource limits that interact with LimitRange—test thoroughly in target environment.