This error occurs when network connectivity fails between pods, services, or external endpoints. Fix it by verifying IP forwarding settings, checking DNS resolution, reviewing network policies, and ensuring CNI plugins are functioning correctly.
The "connect: connection timed out" error indicates that a network connection attempt failed to establish within the expected time limit. In Kubernetes, this typically means a pod cannot reach another pod, service, or external endpoint. The failure can occur at several layers: DNS resolution failures prevent hostname lookup; IP forwarding disabled on a node blocks inter-pod traffic; network policies restrict communication; or CNI plugin misconfiguration breaks pod networking entirely. The timeout behavior distinguishes this error from "connection refused" (target reachable but not listening): with timeouts, packets never receive a response, which points to routing, firewall, or fundamental connectivity problems.
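The refused-versus-timed-out distinction can be reproduced outside Kubernetes with a plain TCP socket. The sketch below (plain Python, no cluster required; the helper name is ours) classifies the failure modes a connect attempt can hit:

```python
import socket

def classify_connect(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP connection and name the failure mode."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "connected"       # target reachable and listening
    except ConnectionRefusedError:
        return "refused"         # target answered with RST: reachable, nothing listening
    except socket.timeout:
        return "timed out"       # no answer at all: routing, firewall, or dropped packets
    except OSError:
        return "unreachable"     # e.g. no route to host
    finally:
        s.close()

# Bind then release a local port so we know nothing is listening on it.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()

print(classify_connect("127.0.0.1", closed_port))  # -> refused (an RST, not a timeout)
```

A closed port on a reachable host answers immediately with "refused"; only a path that silently drops packets produces the "timed out" result this article is about.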
Check if IP forwarding is disabled (common cause):
# SSH to affected node and check
sysctl net.ipv4.ip_forward
# Output: 0 means disabled (problematic)
Enable IP forwarding:
# Enable immediately
sudo sysctl -w net.ipv4.ip_forward=1
# Persist across reboots
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl -p /etc/sysctl.d/99-ipforward.conf
Bridge netfilter is required for iptables rules on Linux bridges:
# Enable on each node
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=1
# Persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
sudo sysctl -p /etc/sysctl.d/99-bridge-nf.conf
Create a debug pod with diagnostic tools:
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- bash
# Inside pod, test DNS resolution
nslookup kubernetes.default
# Should show IP, not "connection timed out"
# Check resolv.conf
cat /etc/resolv.conf
# Should include nameserver pointing to CoreDNS
If DNS fails, check CoreDNS directly:
# Get CoreDNS pod IP
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
# Query CoreDNS directly from debug pod
nslookup kubernetes.default <COREDNS_POD_IP>
Check CoreDNS pod status:
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
If CoreDNS is overwhelmed, scale it up:
kubectl scale --replicas=3 -n kube-system deployment/coredns
Verify CoreDNS is receiving queries:
kubectl logs -n kube-system -l k8s-app=kube-dns -f
From a debug pod, test both DNS and direct IP:
# Test by service name (uses DNS)
curl -v --connect-timeout 5 http://my-service:8080/health
# Get service cluster IP
kubectl get service my-service
# Test by cluster IP directly (bypasses DNS)
curl -v --connect-timeout 5 http://10.100.225.223:8080/health
Interpretation:
- Direct IP works but service name fails → DNS issue
- Both fail → network policy or service not running
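The same two-step check can be scripted. A minimal sketch (a hypothetical helper using plain Python sockets in place of curl) that maps the two outcomes to the verdicts above:

```python
import socket

def dns_vs_ip(hostname: str, ip: str, port: int, timeout: float = 5.0) -> str:
    """Mirror the two curl tests: resolve by name, then connect by IP."""
    try:
        socket.getaddrinfo(hostname, port)
        name_resolves = True
    except socket.gaierror:
        name_resolves = False
    try:
        socket.create_connection((ip, port), timeout=timeout).close()
        ip_reachable = True
    except OSError:
        ip_reachable = False

    if ip_reachable and not name_resolves:
        return "DNS issue"
    if not ip_reachable:
        return "network policy or service not running"
    return "service reachable by name and IP"

# Demo against localhost: the name resolves, but nothing listens on the
# chosen port, so the verdict matches the second bullet above.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
free_port = tmp.getsockname()[1]
tmp.close()
print(dns_vs_ip("localhost", "127.0.0.1", free_port))
```

Inside a cluster you would pass the service name, its cluster IP from `kubectl get service`, and the service port.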
List network policies affecting traffic:
kubectl get networkpolicies --all-namespaces
kubectl describe networkpolicy <policy-name> -n <namespace>
If policies exist, ensure DNS egress is allowed:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP  # DNS falls back to TCP for large responses
      port: 53
Use tcpdump on worker node to verify traffic:
# On node, capture traffic to pod IP
sudo tcpdump -i any -n host 10.244.1.25
# SYN without ACK = traffic blocked or port not listening
CNI Plugin Troubleshooting:
- Flannel: Check routes and IP forwarding; issues often stem from misconfigured routes
- Calico: Verify calico-node pods are running; check calicoctl node status
- AWS VPC CNI: Requires IAM permissions and available subnet IPs
Pod-to-Pod vs Pod-to-Service:
- Pod-to-pod traffic uses the CNI plugin directly (an overlay or routed network, depending on the plugin)
- Pod-to-service goes through kube-proxy + iptables + CoreDNS
- If pod-to-pod works but pod-to-service times out → DNS or kube-proxy issue
- If both fail → CNI plugin or IP forwarding issue
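The decision table above can be encoded as a small triage function (a sketch, not an exhaustive diagnosis):

```python
def likely_cause(pod_to_pod_ok: bool, pod_to_service_ok: bool) -> str:
    """Map the two connectivity test results to the likely failure layer."""
    if not pod_to_pod_ok:
        # Service traffic rides on pod networking, so both paths fail together.
        return "CNI plugin or IP forwarding issue"
    if not pod_to_service_ok:
        return "DNS or kube-proxy issue"
    return "pod networking looks healthy"

print(likely_cause(True, False))   # -> DNS or kube-proxy issue
print(likely_cause(False, False))  # -> CNI plugin or IP forwarding issue
```

Running the pod-to-pod test first therefore narrows the search fastest: a failure there rules out DNS and kube-proxy as the primary cause.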
Performance Tuning:
- Install Node Local DNS Cache to reduce latency
- Set appropriate ndots value in pod resolv.conf (default 5 causes extra DNS queries)
- Consider scaling CoreDNS based on cluster size
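The ndots effect is easy to see once the resolver's search logic is written out. The sketch below assumes glibc-style behavior and the typical search path for a pod in the default namespace (the function name is ours):

```python
def dns_query_order(name: str, search_domains: list, ndots: int = 5) -> list:
    """Order in which a glibc-style resolver tries candidate names."""
    if name.endswith("."):
        return [name]                  # already absolute: one query only
    via_search = [f"{name}.{d}." for d in search_domains]
    as_is = [name + "."]
    if name.count(".") < ndots:
        return via_search + as_is      # search list first -> extra queries
    return as_is + via_search

# Typical pod search path (default namespace). With ndots=5, even a name
# like "example.com" (1 dot < 5) walks the whole search list first.
search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for q in dns_query_order("my-service", search):
    print(q)
```

A short service name thus costs up to four lookups; lowering ndots via the pod's `dnsConfig`, or using fully qualified names with a trailing dot, cuts that to one.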