The Ingress controller cannot reach healthy backend pods because they are failing health checks, restarting, or not listening on the expected port. A 502 indicates the gateway received an invalid response from the upstream service, typically due to readiness probe failures, port mismatches, or pods crashing. Fix by ensuring backend pods are healthy and ports match the service configuration.
A 502 Bad Gateway error from an Ingress controller (nginx, traefik, etc.) means the Ingress received traffic but couldn't reach a healthy backend pod. The backend service exists but all pods are either not ready (readiness probe failing), crashing, or not listening on the configured port. This differs from 503 (no endpoints) in that endpoints exist but are unhealthy.
Examine pod readiness:
kubectl get pods -n <namespace> -o wide
kubectl describe pod <pod-name> -n <namespace> | grep -A10 "Conditions"
Look for the "Ready" condition. If it is False, the readiness probe is failing. Check:
kubectl describe pod <pod-name> -n <namespace> | grep -A5 "Readiness probe"
Check the full port chain:
# Ingress backend service
kubectl get ingress <ingress-name> -n <namespace> -o yaml | grep -A10 "backend:"
# Service port configuration
kubectl get service <service-name> -n <namespace> -o yaml | grep -A5 "ports:"
# Pod listening port
kubectl exec <pod-name> -n <namespace> -- netstat -tlnp | grep LISTEN
If netstat is not present in the container image, use ss -tlnp instead. All three must align: ingress → service targetPort → pod containerPort.
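The alignment check can be sketched as a quick shell comparison. The three values below are placeholders: substitute whatever the commands above actually report for your cluster.

```shell
# Illustrative port-alignment check; all three numbers are assumptions.
ingress_port=8080     # from the Ingress backend spec
service_target=8080   # from the Service's targetPort
container_port=8080   # from the pod's LISTEN output

if [ "$ingress_port" -eq "$service_target" ] && [ "$service_target" -eq "$container_port" ]; then
  echo "ports aligned"
else
  echo "mismatch: ingress=$ingress_port service=$service_target pod=$container_port"
fi
```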
Review pod logs:
kubectl logs <pod-name> -n <namespace> --tail=100
kubectl logs <pod-name> -n <namespace> -p # previous container instance (if it crashed)
Look for startup errors, port binding failures, or unhandled exceptions.
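A filter like the one below narrows log output to likely culprits; the sample log lines and the pattern list are illustrative assumptions, shown here against a canned two-line log so the behavior is visible:

```shell
# Illustrative: the kind of filter to run over 'kubectl logs' output.
# The sample lines and patterns are assumptions; adjust to your log format.
printf 'INFO  server starting\nFATAL bind: address already in use\n' \
  | grep -Ei 'bind|refused|exception|panic|fatal'
```

Against a live pod this becomes `kubectl logs <pod-name> -n <namespace> --tail=200 | grep -Ei '...'` with the same pattern list.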
If the application starts slowly, increase the probe timeouts:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10  # increased from 0
  timeoutSeconds: 3        # increased from 1
  periodSeconds: 5
  failureThreshold: 2
Redeploy and monitor.
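If startup time varies widely, raising initialDelaySeconds alone forces a worst-case wait on every pod. A startupProbe can hold off the readiness checks until the app first comes up; the sketch below is an assumption-laden example (path and thresholds must match your app):

```yaml
# Hedged sketch: startupProbe gates the other probes until its first
# success, allowing up to 30 x 5s = 150s for a slow cold start.
startupProbe:
  httpGet:
    path: /health    # assumed endpoint; match your readinessProbe
    port: 8080
  periodSeconds: 5
  failureThreshold: 30
```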
Resource starvation causes timeouts and hangs:
kubectl top pod <pod-name> -n <namespace>
kubectl describe node <node-name> | grep -A10 "Allocated resources"
Increase resource requests/limits:
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi
Port-forward and test the readiness endpoint:
kubectl port-forward pod/<pod-name> 8080:8080 -n <namespace> &
curl -v http://localhost:8080/health
If the endpoint returns an error or hangs, fix the application's health check logic.
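One common bug is a health handler that blocks on a slow dependency until the probe times out. A shell sketch of the fail-fast pattern (the dependency check is a placeholder, not your real check):

```shell
#!/bin/sh
# Hypothetical exec-style health check: wrap each dependency probe in
# 'timeout' so a hung dependency fails the check quickly instead of
# letting the probe hang past its timeoutSeconds.
DEP_CHECK='exit 0'   # placeholder; a real script might ping a database here

if timeout 2 sh -c "$DEP_CHECK"; then
  echo healthy
else
  echo unhealthy
  exit 1
fi
```

The same bounded-timeout idea applies whether the probe is an exec script or an HTTP handler inside the application.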
Large requests or slow upstream responses can also trip proxy limits:
kubectl describe ingress <ingress-name> -n <namespace>
For nginx-ingress, add annotations:
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: "50m"
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
Unlike 503, a 502 means the Ingress controller found endpoints but they're unhealthy. Check the Ingress controller logs (the deployment name varies by install; newer releases use ingress-nginx-controller):
kubectl logs -n ingress-nginx deployment/nginx-ingress-controller
Use tcpdump on the pod to verify traffic arrives:
sudo tcpdump -i any port 8080
For connection pools, ensure the backend isn't rejecting connections because its listen queue is full. Graceful shutdown is critical: give pods preStop hooks so they drain in-flight connections before termination.
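The graceful-shutdown advice can be sketched as a pod spec fragment. The sleep length is an assumption; it should cover the time the Ingress controller needs to observe the endpoint removal before the container gets SIGTERM:

```yaml
spec:
  terminationGracePeriodSeconds: 30   # must exceed preStop sleep + drain time
  containers:
  - name: app                         # hypothetical container name
    lifecycle:
      preStop:
        exec:
          # Keep serving while the endpoint is removed from load balancing,
          # then let SIGTERM trigger the application's own drain logic.
          command: ["sh", "-c", "sleep 10"]
```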