This error occurs when the proxy or ingress controller cannot connect to the upstream service. Fix it by verifying that the application listens on 0.0.0.0, that service selectors match pod labels, that container and service ports align, and that pods are healthy and ready.
The "upstream connect error or disconnect/reset before headers" error indicates that the proxy (Envoy, NGINX, or Istio sidecar) successfully resolved the service but failed to establish a connection to the upstream pod. The connection was either refused, reset, or timed out before receiving any response headers. This is one of the most common networking issues in Kubernetes environments using service meshes or ingress controllers. It means traffic reached the proxy layer but couldn't complete the connection to the actual application pod. Common causes include applications binding to localhost instead of all interfaces, pods crashing or being unhealthy, port mismatches between service and container, or expired mTLS certificates in service mesh configurations.
Check what address the application is binding to:
# Check listening ports in the pod
kubectl exec -it <pod-name> -- netstat -tlnp
# or
kubectl exec -it <pod-name> -- ss -tlnp
Look for 0.0.0.0:8080 (correct), not 127.0.0.1:8080 (problem).
Fix in application configuration:
# For Node.js
ENV HOST=0.0.0.0
# In application code
server.listen(8080, '0.0.0.0')
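After redeploying with the corrected bind address, you can confirm the service answers from inside the cluster before involving the ingress; the throwaway busybox pod, the service DNS name, and the /health path below are assumptions to adapt:
# Hit the service directly from a temporary pod, bypassing the ingress/proxy layer
kubectl run tmp-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://my-service.default.svc.cluster.local/health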
Check that service endpoints are populated:
# Get service selector
kubectl get service my-service -o yaml | grep -A3 selector
# Check pod labels
kubectl get pods --show-labels
# Verify endpoints exist
kubectl get endpoints my-service
kubectl get endpointslices -l kubernetes.io/service-name=my-service
If endpoints are empty, update labels to match:
# Service selector
spec:
  selector:
    app: my-app        # Must match pod labels
# Pod labels
metadata:
  labels:
    app: my-app        # Must match service selector
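A quick way to confirm the two sides actually agree is to list pods using the same label selector the Service uses; app=my-app is the assumed label from the snippet above:
# "No resources found" means the selector matches nothing, so the endpoints stay empty
kubectl get pods -l app=my-app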
Verify pods are healthy and passing probes:
# Check pod status
kubectl get pods -o wide
# Look for probe failures
kubectl describe pod <pod-name> | grep -A5 "Readiness"
# Check events for crash reasons
kubectl get events --field-selector involvedObject.name=<pod-name>
Configure a proper readiness probe:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
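If the application exposes no HTTP health endpoint, a plain TCP check on the same port is a reasonable fallback (a sketch, assuming port 8080):
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5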
Ensure ports align between container, service, and ingress:
# Check container port
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].ports}'
# Check service ports
kubectl get service my-service -o yaml | grep -A5 ports
# Test connectivity directly
kubectl port-forward <pod-name> 8080:8080
curl http://localhost:8080/health
Correct configuration:
# Container
ports:
- containerPort: 8080   # App listens here
# Service
ports:
- port: 80              # Service exposes this
  targetPort: 8080      # Routes to container port
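Named ports make this kind of mismatch harder to introduce, since the Service references the container port by name instead of by number (a sketch using the same values):
# Container
ports:
- name: http
  containerPort: 8080
# Service
ports:
- port: 80
  targetPort: http      # Resolved against the container port named "http"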
Check proxy logs for connection errors:
# NGINX Ingress logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller | grep "upstream"
# Istio sidecar logs
kubectl logs <pod-name> -c istio-proxy | grep "upstream"
# Envoy admin interface (if available)
kubectl port-forward <pod-name> 19000:19000
# Visit http://localhost:19000/config_dump
Look for:
- "connection refused" - port not listening
- "connection reset" - app crashed
- "no healthy upstream" - all pods unhealthy
For Istio/Envoy, configure retry and timeout policies:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - timeout: 30s
    retries:
      attempts: 3
      perTryTimeout: 10s
      retryOn: "5xx,reset,connect-failure"
    route:
    - destination:
        host: my-service
        port:
          number: 8080
For NGINX Ingress, use annotations:
annotations:
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
Connection Pool Configuration (Istio/Envoy):
trafficPolicy:
  connectionPool:
    http:
      http1MaxPendingRequests: 100
      http2MaxRequests: 100
    tcp:
      maxConnections: 100
Outlier Detection (passive health checks):
outlierDetection:
  consecutive5xxErrors: 5
  interval: 30s
  baseEjectionTime: 30s
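Both of the snippets above belong under trafficPolicy in a DestinationRule; a minimal complete resource might look like this (the host and metadata name are assumed):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 100
        http2MaxRequests: 100
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s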
Common Patterns:
- Pod-to-pod works but ingress fails → Check ingress backend protocol
- Works initially then fails → Check resource limits and OOM kills (see the check after this list)
- Intermittent under load → Connection pool exhaustion, scale pods
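For the "works initially then fails" case, the last termination reason shows whether the container was OOM-killed; the jsonpath below assumes a single container per pod:
# "OOMKilled" means the memory limit was hit; raise the limit or fix the leak
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'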
Backend Protocol: if the backend itself serves HTTPS, tell the ingress controller to use HTTPS upstream:
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
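In context, that annotation sits on the Ingress resource itself; the host, path, and port below are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 443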
How to fix "HTTP/2 connection refused" error in Kubernetes
missing request for cpu in container
How to fix "missing request for cpu in container" in Kubernetes HPA
error: invalid configuration
How to fix "error: invalid configuration" in Kubernetes
etcdserver: cluster ID mismatch
How to fix "etcdserver: cluster ID mismatch" in Kubernetes
running with swap on is not supported
How to fix "running with swap on is not supported" in kubeadm