Deployment pods stay in a not-ready state because of failing health checks, application crashes, or resource constraints. Fix the issue by debugging the affected pods, adjusting probe settings, resolving application errors, or increasing resources.
A deployment is considered "not ready" when fewer pods are in Ready state than the desired replica count. This means:

1. Pods are created and running
2. Their readiness probes are failing
3. Traffic doesn't route to them
4. Your deployment is effectively offline

Unlike pending pods, NotReady pods indicate application-level issues, not scheduling issues.
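You can see this directly in the deployment's READY column, which shows ready pods versus desired replicas. The output below is illustrative; the deployment name and counts will differ in your cluster:

kubectl get deployment -n <namespace> <name>

# NAME     READY   UP-TO-DATE   AVAILABLE   AGE
# my-app   1/3     3            1           12m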
# Check deployment status and events:
kubectl describe deployment -n <namespace> <name>

# Check pod status:
kubectl get pods -n <namespace> -o wide

# Check pod logs:
kubectl logs -n <namespace> <pod-name>
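If the logs look healthy, it can help to call the readiness endpoint from inside the pod to see exactly what the kubelet sees. The path and port below mirror the probe example later in this article, and the command assumes the container image ships wget (use curl or a debug container if it doesn't):

kubectl exec -n <namespace> <pod-name> -- wget -qO- http://localhost:8080/health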
kubectl edit deployment -n <namespace> <name>

# Increase readiness probe delay:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 60   # Increased
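If the application's startup time is unpredictable, a startupProbe can be a cleaner fix than a large initialDelaySeconds, because the readiness probe only takes over once startup has succeeded. The thresholds below are illustrative and assume the same /health endpoint:

startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30
  periodSeconds: 5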
# Check environment variables:
kubectl describe pod -n <namespace> <pod-name>
# Check application logs:
kubectl logs -n <namespace> <pod-name> --previous
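Two further read-only checks can confirm what the container actually received and why the previous instance exited. The first assumes the image includes a standard env binary:

# Print the container's environment variables:
kubectl exec -n <namespace> <pod-name> -- env

# Show why the last container instance terminated (e.g. Error, OOMKilled):
kubectl get pod -n <namespace> <pod-name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'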
kubectl edit deployment -n <namespace> <name>

# Increase resources:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
kubectl get deployment -n <namespace> <name>
# READY should show N/N

For debugging, you can temporarily neutralize the readiness probe: setting failureThreshold to a very high value (for example 999999) stops transient failures from flipping pods back to NotReady, and removing the readinessProbe from the pod spec entirely marks pods Ready regardless of application state. Revert either change before serving production traffic.
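If you do remove the probe while debugging, a JSON patch avoids hand-editing the manifest; this sketch assumes the probe sits on the first container in the pod template:

kubectl patch deployment -n <namespace> <name> --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"}]'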