MemoryPressure indicates that a node has insufficient memory available for its workloads. When available memory falls below the kubelet's eviction threshold, new pods are kept off the node and existing pods may be evicted. This is a resource allocation issue that reduces usable cluster capacity.
When a Kubernetes node experiences memory pressure, the kubelet sets its MemoryPressure condition to True. This signals that available memory on the node has fallen below the eviction threshold (default 100Mi). The kubelet taints the node (node.kubernetes.io/memory-pressure) so the scheduler avoids placing new pods there, and it may evict running pods to reclaim memory. The threshold is controlled by the kubelet's eviction flags, for example --eviction-hard=memory.available<100Mi. If the node runs out of memory entirely, the system OOM killer may terminate processes directly, bypassing pod QoS guarantees.
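To confirm this quickly, the taint the kubelet applies can be read straight off the node object; a minimal sketch (the node name is a placeholder):
# Look for node.kubernetes.io/memory-pressure in the taint list
kubectl get node <node-name> -o jsonpath='{.spec.taints}'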
Check node status:
kubectl get nodes
kubectl describe node <node-name>
Look for "MemoryPressure" with True status. The describe output shows:
- Capacity and Allocatable memory
- Allocated resources (CPU and memory requested by pods)
- Conditions section with transition timestamps
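To pull just the condition and its last transition time without scanning the full describe output, a jsonpath query is enough; a minimal sketch:
# Prints the condition status followed by its lastTransitionTime
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="MemoryPressure")].status}{" since "}{.status.conditions[?(@.type=="MemoryPressure")].lastTransitionTime}{"\n"}'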
List pods on the affected node and their actual memory usage:
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>
kubectl top pods -A --sort-by=memory
Note that kubectl top pods does not print the node name, so cross-reference its output against the pod list from the previous command. Compare actual usage (from kubectl top) to each pod's requests and limits, and identify which pod(s) consume the most memory.
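If you want usage for only the pods on the affected node, one rough approach (requires metrics-server and standard shell tools) is to feed the pod list into kubectl top:
# Memory usage for pods scheduled on the affected node, roughly sorted by the memory column
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
while read -r ns pod; do
  kubectl top pod "$pod" -n "$ns" --no-headers
done | sort -k3 -h -r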
For each high-memory pod:
kubectl describe pod <pod-name> -n <namespace>
Check the Limits and Requests sections. If a pod requests 2Gi but uses 4Gi, you have either:
1. Underestimated resource request (normal workload needs more memory)
2. Memory leak (application consuming unbounded memory over time)
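To tell the two apart, sample the pod's memory over time: usage that climbs steadily under constant load points to a leak, while a stable plateau points to an undersized request. A rough sampling loop (the 60-second interval is arbitrary):
# Record the pod's reported memory once a minute; review the trend later
while true; do
  echo "$(date -Is) $(kubectl top pod <pod-name> -n <namespace> --no-headers)"
  sleep 60
done | tee memory-trend.log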
Check logs for memory-related warnings:
kubectl logs <pod-name> -n <namespace> | grep -i memory
SSH into the node and check memory allocation:
df -h # Check disk usage (heavy disk I/O fills the page cache, which consumes memory)
free -h # Show memory breakdown
ps aux --sort=-%mem | head -20 # Top memory-consuming processes
Look for:
- containerd or docker processes using excessive memory
- System caches (page cache in "cached" column)
- Kernel modules or systemd services consuming memory
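If system daemons rather than pods look suspicious, per-cgroup accounting shows where the memory actually sits; a sketch using systemd-cgtop (present on systemd-based nodes, slice names vary by cgroup driver):
# Order cgroups by memory; compare kubepods.slice (pod memory) against system.slice (daemons)
systemd-cgtop -m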
Remember that not all node memory is available to pods: if the node has 16Gi total and the kubelet reserves 1Gi for the system (via --system-reserved / --kube-reserved), only about 15Gi is allocatable to pods.
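Capacity versus allocatable can be read directly from the node object; a minimal sketch:
# Total memory vs. what the scheduler may hand out to pods
kubectl get node <node-name> -o jsonpath='capacity: {.status.capacity.memory}, allocatable: {.status.allocatable.memory}{"\n"}'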
Check current eviction thresholds:
kubectl describe node <node-name> | grep -A5 Allocatable
kubectl get --raw /api/v1/nodes/<node-name>/proxy/configz | grep -A5 eviction
Default: --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%
If the default thresholds don't leave enough headroom for your workloads, adjust them on the kubelet:
# On the node, edit /etc/kubernetes/kubelet.env or systemd drop-in
--eviction-hard=memory.available<500Mi # More conservative
--eviction-soft=memory.available<1Gi # Graceful eviction threshold
--eviction-soft-grace-period=memory.available=1m
Restart the kubelet after changes.
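On kubeadm-provisioned nodes the same settings usually live in a KubeletConfiguration file rather than flags; a minimal sketch, assuming the common /var/lib/kubelet/config.yaml location (fields are from kubelet.config.k8s.io/v1beta1):
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"
evictionSoft:
  memory.available: "1Gi"
evictionSoftGracePeriod:
  memory.available: "1m"
As with the flag form, restart the kubelet to apply the change.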
Option A: Move pods to another node:
kubectl cordon <node-name> # Prevent new scheduling (drain does this automatically)
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
Pods are rescheduled to other nodes. Then, after fixing memory:
kubectl uncordon <node-name> # Re-enable scheduling
Option B: Scale down deployments to reduce memory footprint:
kubectl scale deployment <name> --replicas=1
Option C: Add a new node to the cluster with higher memory capacity.
Option D: Increase memory on the existing node (VM resize in cloud provider).
If a specific pod consumes ever-increasing memory:
1. Check application logs for warnings:
kubectl logs <pod-name> --tail=100
2. Profile memory inside container:
kubectl exec -it <pod-name> -- top # Real-time process memory
kubectl exec -it <pod-name> -- free -h # Note: shows node-level memory, not the container's limit; see the cgroup sketch after this list
3. Common causes in applications:
- Unbounded caches without TTL
- Accumulated data in lists/maps (connection pools, request queues)
- File handles not closed
- Third-party library memory leak
4. Solutions:
- Add memory limits to trigger pod restart:
  resources:
    limits:
      memory: "512Mi"
- Implement periodic cleanup or cache eviction
- Use a memory profiler (Python: memory_profiler; Node.js: --heap-prof or heap snapshots via --inspect)
- Update library to latest bug-fix version
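Because free inside the container reports node-wide memory, the cgroup accounting files are the reliable way to see the container's own usage. A minimal sketch; which path exists depends on whether the node runs cgroup v2 or v1:
# cgroup v2: current usage in bytes
kubectl exec <pod-name> -n <namespace> -- cat /sys/fs/cgroup/memory.current
# cgroup v1 equivalent
kubectl exec <pod-name> -n <namespace> -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes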
Set up cluster-wide monitoring:
kubectl get nodes -o custom-columns='NAME:.metadata.name,MEMORY:.status.allocatable.memory,PRESSURE:.status.conditions[?(@.type=="MemoryPressure")].status'
Enable automated remediation:
- Descheduler: Evicts pods from overcommitted nodes
- VPA (Vertical Pod Autoscaler): Recommends optimal resource requests
- HPA (Horizontal Pod Autoscaler): Scales replicas based on metrics
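As an illustration of the HPA route, a minimal sketch of an autoscaling/v2 HorizontalPodAutoscaler that scales on average memory utilization (the deployment name, replica bounds, and target percentage are placeholders):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
Memory-based scaling only helps when per-replica memory actually drops as load spreads out; a leaking process will keep climbing regardless of replica count.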
Set resource requests and limits on all deployments:
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"
Monitor with Prometheus and alert on high memory usage. kube-state-metrics exposes the node condition as:
kube_node_status_condition{condition="MemoryPressure",status="true"}
Memory pressure handling varies by distribution: EKS uses custom eviction settings, AKS uses node pools with auto-repair, and GKE integrates with Cloud Monitoring. On serverless platforms (Fargate), memory pressure is handled by the platform. In multi-tenant clusters, use LimitRange and ResourceQuota per namespace to restrict resource abuse; memory overcommitment is common in development but dangerous in production. Check quotas with kubectl describe resourcequota -A. The kubelet may not report MemoryPressure immediately if memory reclamation (eviction) is keeping pace. If nodes frequently hit memory pressure, it's a sign of insufficient cluster capacity or mis-sized workloads. For StatefulSets, ensure PersistentVolume provisioning doesn't consume node memory unexpectedly. Watch for kernel memory growth (swap, buffers, page cache) that may not appear in container RSS/PSS metrics.
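If alerting is managed through plain Prometheus rule files, a minimal alerting-rule sketch built on the metric above could look like this (group, alert name, and severity label are placeholders; the metric comes from kube-state-metrics):
groups:
- name: node-memory
  rules:
  - alert: NodeMemoryPressure
    expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Node {{ $labels.node }} is reporting MemoryPressure"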