OOMKilled (exit code 137) means the Linux kernel terminated your container for exceeding its memory limit. Increase memory limits or optimize your application memory usage.
OOMKilled indicates that the Linux kernel's Out-Of-Memory (OOM) killer terminated a container because it exceeded its memory limit. The container receives SIGKILL (signal 9), resulting in exit code 137 (128 + 9). Memory limits are enforced as a hard cap: unlike CPU, where exceeding the limit only causes throttling, exceeding the memory limit results in immediate termination. Importantly, OOMKilled applies to individual containers, not entire pods. If a pod has multiple containers, only the container that exceeded its limit is killed; the pod continues running with the remaining containers and restarts the killed container according to its restart policy.
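For a multi-container pod, comparing per-container restart counts is a quick way to see which container is being killed. A small sketch using kubectl's jsonpath support (substitute your pod name):

kubectl get pod <pod-name> -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'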
Check the container termination status:
kubectl describe pod <pod-name>

Look for the Last State section showing:

Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137

Also check the last recorded container state directly:

kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].lastState}'
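OOM kills may also be surfaced through events and the node's kernel log. A hedged sketch (event contents and kernel message formats vary by Kubernetes version and distribution):

# Recent events for the pod (restarts, back-off, probe failures)
kubectl get events --field-selector involvedObject.name=<pod-name> --sort-by=.lastTimestamp

# If you have node access: kernel OOM killer messages
dmesg -T | grep -i -E 'out of memory|oom-kill'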
View the current resource configuration:
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].resources}'

Monitor actual memory usage:

kubectl top pod <pod-name>

Note: kubectl top requires metrics-server to be installed in your cluster.
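If kubectl top fails with an error such as "Metrics API not available", check whether metrics-server is actually running (it is usually deployed in kube-system, though some distributions place it elsewhere):

kubectl get deployment metrics-server -n kube-system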
Update your deployment with higher limits based on observed usage:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"

Set requests to typical usage and limits to handle peak load. A common starting point is limits = 1.5x to 2x the requests value.
Apply changes: kubectl apply -f deployment.yaml
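If you prefer a one-off change without editing the manifest, kubectl can update the resources directly; this is a sketch with placeholder names, and it will drift from your YAML if you manage manifests in version control:

kubectl set resources deployment/<deployment-name> -c <container-name> --requests=memory=512Mi --limits=memory=1Gi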
JVM heap is only part of Java memory usage. Account for metaspace, threads, and native memory:
env:
  - name: JAVA_OPTS
    value: "-XX:MaxRAMPercentage=75.0 -XX:+UseContainerSupport"

Or set explicit heap limits lower than container limits:

env:
  - name: JAVA_OPTS
    value: "-Xmx768m -Xms512m"  # For a 1Gi container limit

Leave at least 25% of container memory for non-heap usage.
If limits seem adequate, the application may have a memory leak. Enable profiling:
# For Java, generate heap dump on OOM
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof

Note that this fires when the JVM itself throws OutOfMemoryError (heap exhaustion within -Xmx); a kernel OOM kill sends SIGKILL and gives the JVM no chance to write a dump, so keep -Xmx below the container limit if you want dumps. For other languages, use appropriate profiling tools. Monitor memory growth over time with Prometheus/Grafana to identify leak patterns.
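With Prometheus scraping kubelet/cAdvisor metrics (a common but not universal setup), the container working set is the usual number to trend; plotted over hours or days, a slow leak shows up as a steady climb toward the limit. A sketch using the standard metric names (the second query assumes kube-state-metrics is deployed; the namespace and pod labels are placeholders):

# Memory working set per container
container_memory_working_set_bytes{namespace="default", pod="<pod-name>"}

# Configured memory limit, for comparison
kube_pod_container_resource_limits{resource="memory", namespace="default", pod="<pod-name>"}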
Set identical requests and limits to get Guaranteed QoS, which has the lowest OOM score:
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "500m"

Guaranteed pods are killed last when a node is under memory pressure. BestEffort pods (no limits) are killed first.
Understanding QoS classes helps prevent OOMKilled: Guaranteed pods (requests = limits) have the lowest oom_score_adj (-997), Burstable pods (requests < limits) have medium priority, and BestEffort pods (no requests/limits) are killed first.
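You can confirm which QoS class a pod was assigned from its status:

kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'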
On EKS, use CloudWatch Container Insights to track memory trends. On GKE, note that the Cloud Console memory graphs may differ from kubectl top—always verify with kubectl for accurate readings. On AKS, the "Diagnose and Solve Problems" blade can help identify memory patterns.
For applications with unpredictable memory spikes, consider using Vertical Pod Autoscaler (VPA) to automatically adjust limits based on observed usage. However, be aware that VPA currently requires pod restarts to apply changes.
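A minimal VPA object might look like the sketch below. It assumes the VPA components (recommender, updater, admission controller) are installed in the cluster, and my-app is a placeholder Deployment name; updateMode "Off" produces recommendations only, while "Auto" lets VPA evict pods to apply them.

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa              # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder Deployment
  updatePolicy:
    updateMode: "Off"           # switch to "Auto" to apply recommendations via eviction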
Node-level memory pressure can also trigger OOM kills even if individual containers are within limits. Monitor node memory with kubectl describe node and look for MemoryPressure conditions.
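To check for node-level pressure:

# Per-node usage (requires metrics-server)
kubectl top nodes

# MemoryPressure condition on a specific node
kubectl describe node <node-name> | grep -i memorypressure

If the nodes themselves are running hot, containers can be OOM killed or evicted even though they never exceeded their own limits; in that case the fix is adding capacity or rebalancing workloads rather than raising container limits.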