The Ingress controller returns 503 when it cannot find any healthy backend endpoints: no pods are ready, the backend service doesn't exist, or the deployment is scaled to zero replicas. A 503 means the service has no available endpoints to route traffic to. Fix it by ensuring pods are running and Ready, and by verifying that the service has endpoints.
A 503 Service Unavailable error from an Ingress controller means the service has no healthy endpoints. This differs from a 502 (endpoints exist but are unhealthy) in that no endpoints are available at all. Causes include: no pods are running, none of the pods are Ready, the service selector doesn't match the pod labels, or the backend service doesn't exist.
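For reference, here is a minimal sketch of a Deployment and Service pair (all names, labels, images, and ports are placeholder assumptions). Endpoints are only created when the Service's spec.selector matches the labels on Ready pods:

```yaml
# Hypothetical example: the Service selector must match the pod template labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp            # must match the Service selector below
    spec:
      containers:
        - name: web
          image: myapp:1.0    # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                # endpoints are created only for Ready pods with this label
  ports:
    - port: 80
      targetPort: 8080
```

If the selector read app: my-app while the pods carried app: myapp, the Endpoints object would stay empty and the Ingress would return 503 even with pods Running.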
Check if service has any endpoints:
kubectl get endpoints <service-name> -n <namespace>
kubectl describe service <service-name> -n <namespace>
If Endpoints shows <none>, no pods are backing this service. Proceed to check pod status.
Verify deployment is running pods:
kubectl get deployment <deployment-name> -n <namespace>
kubectl describe deployment <deployment-name> -n <namespace>
Look for "Replicas" (desired vs current). If replicas are 0 or current < desired, scale up:
kubectl scale deployment <deployment-name> --replicas=2 -n <namespace>
Find which pods should be handling traffic:
kubectl get pods -n <namespace> --show-labels
kubectl get pods -n <namespace> -l <service-selector-key>=<service-selector-value>
If no pods match the service selector, the labels are wrong.
Compare labels:
kubectl get service <service-name> -n <namespace> -o yaml | grep -A3 selector
kubectl describe pod <pod-name> -n <namespace> | grep Labels
Labels must match exactly (case-sensitive, including values).
Fix with:
kubectl label pod <pod-name> app=myapp -n <namespace> --overwrite
Note that labeling pods directly is a temporary fix: pods recreated by the Deployment revert to the template's labels, so for a lasting fix update the pod template labels or the Service selector.
Examine each pod:
kubectl get pods -n <namespace> -o wide
kubectl describe pod <pod-name> -n <namespace>
Check:
- Phase (should be Running)
- Ready condition (should be True)
- Readiness probe status
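Because a pod is only added to the Endpoints object once its Ready condition is True, a misconfigured readiness probe alone can cause a 503 while every pod shows Running. A sketch of a container with an HTTP readiness probe (the path, port, and timings are assumptions):

```yaml
containers:
  - name: web
    image: myapp:1.0           # placeholder image
    readinessProbe:
      httpGet:
        path: /healthz         # assumed health endpoint; must return a 2xx/3xx status
        port: 8080
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe interval
      failureThreshold: 3      # consecutive failures before the pod is marked NotReady
```

If /healthz never succeeds, every pod stays NotReady, the service has no endpoints, and the Ingress returns 503.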
If pods are Pending, check for scheduling issues:
kubectl describe pod <pod-name> -n <namespace> | grep -A10 "Events:"
Verify deployment can schedule pods:
kubectl describe deployment <deployment-name> -n <namespace>
kubectl get events -n <namespace> --sort-by='.lastTimestamp' | tail -20
Common issues:
- Image pull errors (wrong image, no credentials)
- Resource constraints (not enough CPU/memory)
- Node selector or affinity mismatch
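The last two issues show up directly in the pod spec. A hedged sketch (label key, image, and numbers are illustrative) of settings that keep pods Pending when no node can satisfy them:

```yaml
spec:
  nodeSelector:
    disktype: ssd              # pods stay Pending if no node carries this label
  containers:
    - name: web
      image: myapp:1.0         # placeholder image
      resources:
        requests:
          cpu: "500m"          # the scheduler needs a node with this much unreserved CPU
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
```

If requests exceed what any node has free, the pod's Events show a FailedScheduling message naming the unsatisfied resource.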
Check node availability:
kubectl get nodes
kubectl describe node <node-name>
If the deployment was just created, pods may still be starting:
kubectl rollout status deployment/<deployment-name> -n <namespace>
kubectl wait --for=condition=ready pod -l app=<label-value> --timeout=300s -n <namespace>
Wait for the "deployment "..." successfully rolled out" message.
A 503 can also be temporary during rollouts; use kubectl rollout status to monitor progress. For services with external traffic, use readiness gates or a PodDisruptionBudget with minAvailable to prevent all pods from being unavailable at once. Distinguishing 503 (no endpoints) from 502 (unhealthy endpoints) narrows troubleshooting quickly. Check the Ingress controller logs for the exact backend status:
kubectl logs -n ingress-nginx deployment/nginx-ingress-controller | grep upstream
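A minimal PodDisruptionBudget along those lines (the name and label are placeholders matching whatever your pods actually carry):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1          # voluntary disruptions (drains, evictions) may never take the last pod
  selector:
    matchLabels:
      app: myapp
```

Note that a PDB guards only against voluntary disruptions; it does not protect against crashes or failed readiness probes.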