A 504 Gateway Timeout in Kubernetes Ingress occurs when the NGINX controller cannot receive a response from backend services before the timeout expires. This commonly happens with slow applications, database operations, or when timeout settings don't match your workload requirements.
A 504 Gateway Timeout error indicates that your Kubernetes Ingress controller (typically NGINX) successfully contacted a backend pod, but the pod failed to respond within the configured timeout window (default 60 seconds). This is different from a 502 Bad Gateway, which means no backend pods are available at all. The error occurs at the load balancer layer when the upstream application takes too long to process a request. The Ingress controller gives up waiting and returns a 504 error to the client, even if the backend pod is still working on the request.
Run kubectl logs -n ingress-nginx <controller-pod-name> to see if timeouts are the issue. Look for "upstream timed out" messages. Replace <controller-pod-name> with the actual NGINX Ingress controller pod running in the ingress-nginx namespace.
Run kubectl get pods -o wide to see all pods and their status. Check readiness probes: kubectl describe pod <pod-name>. Test direct pod connectivity with kubectl exec -it <pod-name> -- curl http://localhost:<port>/<path> to ensure the pod responds properly.
Edit the Ingress NGINX ConfigMap: kubectl edit configmap nginx-configuration -n ingress-nginx. Add or update these values (in seconds, no "s" suffix):
proxy-read-timeout: "300"
proxy-connect-timeout: "10"
proxy-send-timeout: "300"
upstream-fail-timeout: "10"
Save the ConfigMap; the controller reloads the configuration automatically. If the change does not take effect, force a reload with kubectl rollout restart deployment -n ingress-nginx.
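The same settings can also be applied declaratively rather than through kubectl edit. A minimal sketch of the ConfigMap, assuming the nginx-configuration name used above (Helm installs often name it ingress-nginx-controller instead):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # may be ingress-nginx-controller on Helm installs
  namespace: ingress-nginx
data:
  proxy-read-timeout: "300"    # wait up to 300s for the backend's response
  proxy-connect-timeout: "10"  # fail fast if the backend is unreachable
  proxy-send-timeout: "300"    # allow 300s to send the request body upstream
```

Apply with kubectl apply -f and watch the controller logs for a configuration reload.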
For specific Ingress resources, add annotations without touching the global config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
spec:
  # ... rest of Ingress config
Apply with kubectl apply -f ingress.yaml.
For Classic Load Balancer, add to Service annotation:
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "300"
For an Application Load Balancer (ALB), add to the Ingress:
metadata:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=300
Apply changes with kubectl apply -f and verify in the AWS console.
If your application has slow startup, increase probe timeouts: kubectl edit deployment <deployment-name>. Update the probe configuration:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  timeoutSeconds: 5
  periodSeconds: 10
  failureThreshold: 3
Increase initialDelaySeconds and periodSeconds to allow proper pod startup detection.
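If startup is genuinely slow (tens of seconds or more), a startupProbe is often a cleaner fit than a large initialDelaySeconds, because it holds off the readiness and liveness probes until the application is up. A sketch reusing the /health endpoint and port 8080 from above:

```yaml
startupProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10      # probe every 10s while the app starts
  failureThreshold: 30   # tolerate up to 300s of startup time
```

Once the startup probe succeeds, the regular readiness probe takes over with its normal, tighter thresholds.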
Use Prometheus metrics or application logs to identify slow endpoints. Run kubectl exec <pod> -- curl -w "@curl-format.txt" http://localhost:8080/endpoint to measure response times. Optimize your application code or database queries if endpoints consistently approach timeout limits.
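The curl-format.txt referenced above is a curl --write-out template; a minimal sketch using curl's standard timing variables (the variable names are curl's own, the layout is just one option):

```shell
# Create a write-out template with curl's built-in timing variables.
cat > curl-format.txt <<'EOF'
time_namelookup:    %{time_namelookup}s
time_connect:       %{time_connect}s
time_starttransfer: %{time_starttransfer}s
time_total:         %{time_total}s
EOF

# Then, inside the pod, time the endpoint (path and port are examples):
# kubectl exec <pod> -- curl -s -o /dev/null -w "@curl-format.txt" http://localhost:8080/endpoint
```

If time_total regularly approaches your proxy-read-timeout, optimize the endpoint before raising the timeout further.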
For long-running operations like file uploads or batch processing, consider an asynchronous pattern: return immediately with a job identifier and deliver the result via a webhook or polling endpoint, instead of holding the connection open within a strict timeout window. On AWS EKS, the load balancer timeout is separate from the Ingress timeout, and both must be configured. For Azure AKS with Traefik, configure the serversTransport forwardingTimeouts.responseHeaderTimeout and the entry point's transport.respondingTimeouts. In CI/CD pipelines (GitHub Actions, GitLab Runner), check whether the pipeline's own job timeouts conflict with your Kubernetes timeouts. Note that NetworkPolicy controls which pods may communicate; it does not reduce latency between the Ingress controller and backend pods, so investigate node placement and resource limits instead if latency is the problem.