This error occurs when a container's memory limit exceeds the namespace LimitRange maximum. Fix it by lowering the memory limit to stay within the maximum, or by requesting a LimitRange adjustment.
The "maximum memory usage per Container" error indicates that your container's memory limit exceeds the maximum threshold enforced by a LimitRange in the namespace. LimitRanges prevent any single container from consuming excessive resources. When you specify a memory limit higher than the LimitRange maximum (e.g., requesting 2Gi when max is 1Gi), the admission controller rejects the Pod. This protects the namespace from resource-hungry containers affecting other workloads. Unlike ResourceQuotas which limit total namespace consumption, LimitRange maximums constrain individual containers. This ensures fair resource distribution even within quota limits.
View the current LimitRange:
kubectl get limitrange -n <namespace>
kubectl describe limitrange <name> -n <namespace>
Look for:
- max.memory: Maximum memory limit allowed
- default.memory: Default limit if not specified
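An illustrative (not verbatim) excerpt of the describe output, assuming a 1Gi maximum:

Type        Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---  ---------------  -------------  -----------------------
Container   memory    64Mi  1Gi  256Mi            512Mi          -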
Check what your pod is requesting:
# Dry-run to see computed values
kubectl apply -f pod.yaml --dry-run=client -o yaml | grep -A5 memory
# Or describe existing failed pod attempt
kubectl describe pod <pod-name> -n <namespace>
Compare your limits to the LimitRange maximum.
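To pull the maximum straight out of the LimitRange spec (this assumes a single Container-type entry at index 0):

kubectl get limitrange <name> -n <namespace> -o jsonpath='{.spec.limits[0].max.memory}'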
Reduce the memory limit to fit within the maximum:

# Before (fails if max is 1Gi)
resources:
  limits:
    memory: "1536Mi"

# After (within max)
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "900Mi"  # Below 1Gi maximum

Apply the corrected manifest:
kubectl apply -f pod.yaml -n <namespace>
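If the limit now fits under the LimitRange maximum, admission succeeds and the pod schedules normally:

kubectl get pod <pod-name> -n <namespace>
# Expect the pod to exist and reach Running, rather than the create being rejected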
Modify the deployment template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app      # selector and matching labels are required for a valid Deployment
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-image
        resources:
          requests:
            memory: "256Mi"
          limits:
            memory: "512Mi"  # Within LimitRange max

Apply and verify:
kubectl apply -f deployment.yaml
kubectl get pods -n <namespace>
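You can also watch the rollout; if the new pods were still being rejected by the LimitRange, it would stall instead of completing:

kubectl rollout status deployment/my-app -n <namespace>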
If your application legitimately needs more memory:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: <namespace>
spec:
  limits:
  - type: Container
    max:
      memory: "2Gi"  # Increased from 1Gi
    min:
      memory: "64Mi"
    default:
      memory: "512Mi"
    defaultRequest:
      memory: "256Mi"
For predictable behavior, set requests = limits:

resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "512Mi"  # Equal to request

(Note: the Guaranteed QoS class also requires CPU requests = limits for every container in the Pod.)

Benefits:
- Guaranteed QoS class (least likely to be evicted)
- Predictable scheduling
- No OOMKill surprises
Verify QoS class:
kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'
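To list the QoS class of every pod in the namespace at once:

kubectl get pods -n <namespace> -o custom-columns='NAME:.metadata.name,QOS:.status.qosClass'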
Memory vs CPU Enforcement:
- Memory is non-compressible: exceeding the limit → OOMKilled (exit code 137)
- CPU is compressible: exceeding the limit → throttled, not killed
- Best practice: set memory requests = limits, but allow CPU bursting (see the sketch after this list)
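A sketch of that pattern; the values are placeholders:

resources:
  requests:
    memory: "512Mi"  # memory pinned: request equals limit
    cpu: "250m"      # CPU request below the limit...
  limits:
    memory: "512Mi"
    cpu: "1"         # ...so the container can burst to a full core; excess CPU is throttled, never killed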
OOMKilled Detection:
kubectl describe pod <pod-name>
# Look for: Reason: OOMKilled, Exit Code: 137
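You can also query the container's last terminated state directly (assuming the container of interest is at index 0):

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'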
Pod Eviction vs OOMKill:
- OOMKill: container exceeds its own limit → killed immediately
- Eviction: node runs low on memory → kubelet evicts pods by QoS priority
QoS Classes (eviction priority):
1. Guaranteed (requests = limits): Evicted last
2. Burstable (requests < limits): Evicted middle
3. BestEffort (no resources): Evicted first
Namespace Isolation: Create separate namespaces with different LimitRanges for different workload types (dev/staging/prod).
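For example, a tighter cap in dev than in prod; the namespaces and values here are illustrative:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: dev
spec:
  limits:
  - type: Container
    max:
      memory: "512Mi"  # small ceiling for experiments
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: prod
spec:
  limits:
  - type: Container
    max:
      memory: "4Gi"    # production workloads get more headroom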