Pod or service DNS lookups fail because CoreDNS is not running, network policies block DNS traffic (UDP/53), or the service doesn't exist. Applications cannot reach services by hostname, requiring fully qualified domain names or direct IP addresses as workarounds. Fix by verifying CoreDNS is running, checking network policies, and ensuring proper DNS configuration.
Kubernetes uses CoreDNS to resolve service names to cluster IPs. When DNS resolution fails, pods cannot reach services by hostname (e.g., myservice or myservice.namespace). This happens when CoreDNS crashes, network policies block UDP/53, Alpine Linux containers lack DNS tools, or the service name is wrong. DNS is critical for service discovery in Kubernetes.
Check DNS pods in kube-system namespace:
kubectl get pods -n kube-system | grep coredns
kubectl describe deployment coredns -n kube-system
kubectl logs -n kube-system -l k8s-app=kube-dns
If the pods are not running or keep restarting, check resource limits and node disk space.
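The log output can be scanned for common failure signatures. A shell sketch, run here against illustrative sample lines (the sample is an assumption about the log format, not captured from a real cluster; in live use, pipe `kubectl logs` into the function):

```shell
#!/bin/sh
# Scan CoreDNS logs for common failure signatures (sketch).
# Live usage: kubectl logs -n kube-system -l k8s-app=kube-dns | dns_errors
dns_errors() {
    grep -E 'SERVFAIL|i/o timeout|loop detected' || true
}

# Illustrative sample log lines (assumption: resembles real CoreDNS output):
sample='[INFO] plugin/reload: Running configuration SHA512 = abc123
[ERROR] plugin/errors: 2 myservice.default.svc.cluster.local. A: read udp 10.244.0.3:44321->8.8.8.8:53: i/o timeout'

printf '%s\n' "$sample" | dns_errors   # prints only the i/o timeout line
```

An `i/o timeout` against an upstream resolver usually points at the forwarders or node networking rather than at CoreDNS itself.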
Run an interactive debug pod and test DNS:
kubectl run -it --rm debug --image=ubuntu --restart=Never -- bash
apt-get update && apt-get install -y dnsutils
nslookup kubernetes.default
nslookup myservice.default.svc.cluster.local
If nslookup fails, DNS is misconfigured. If it works, the issue is application-level.
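The same lookup can be scripted from inside a pod. A minimal shell sketch, assuming the default `cluster.local` domain and that `getent` is available in the image (adjust for your cluster):

```shell
#!/bin/sh
# Resolve a service the way a pod would (sketch): try the short name,
# which relies on the search path in /etc/resolv.conf, then fall back
# to the full cluster FQDN. Assumes the default "cluster.local" domain.
resolve_service() {
    name=$1
    namespace=${2:-default}
    getent hosts "$name" || getent hosts "$name.$namespace.svc.cluster.local"
}

# Inside a pod: resolve_service myservice
# No output and a non-zero exit status means cluster DNS is broken.
```

If the short name fails but the FQDN succeeds, the search path in /etc/resolv.conf is the problem rather than CoreDNS.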
Network policies may restrict UDP/53 (DNS):
kubectl get networkpolicies -A
kubectl describe networkpolicy <policy-name> -n <namespace>
If policies exist, ensure they allow UDP traffic to port 53 for the CoreDNS pods in kube-system. Also check egress rules in the application's namespace: an egress policy that does not allow UDP/53 will block DNS. Example ingress allow rule:
ingress:
- from:
  - namespaceSelector: {}
  ports:
  - protocol: UDP
    port: 53
Check the pod's DNS configuration:
kubectl exec -it <pod-name> -n <namespace> -- cat /etc/resolv.conf
Should show:
nameserver 10.96.0.10 # CoreDNS service IP (default, varies by cluster)
search default.svc.cluster.local svc.cluster.local cluster.local
If this is missing or incorrect, restart the pod or check the dnsPolicy in the pod spec.
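This check can be scripted, e.g. in an init script or a diagnostics job. A shell sketch run against sample content (the sample mirrors the defaults above; the expected values are cluster-specific assumptions):

```shell
#!/bin/sh
# Sanity-check resolv.conf content (sketch). In live use, read
# /etc/resolv.conf instead of the $sample variable.
sample='nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5'

# Extract the first nameserver and verify the search path covers the
# cluster domain.
ns=$(printf '%s\n' "$sample" | awk '$1 == "nameserver" { print $2; exit }')
echo "nameserver: $ns"
printf '%s\n' "$sample" | grep -q 'cluster\.local' && echo "search path OK"
```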
Inspect CoreDNS configuration:
kubectl get configmap coredns -n kube-system -o yaml
Look for the Corefile section and ensure the zones are configured correctly. If using custom DNS, verify that the forwarders are correct and reachable.
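For comparison, a stock Corefile looks roughly like this (defaults vary by Kubernetes distribution and CoreDNS version; treat this as a reference point, not your exact config):

```
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

The `forward . /etc/resolv.conf` line sends non-cluster queries to the node's resolvers; if those upstreams are wrong, external names fail while cluster names still resolve.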
Alpine minimal images lack DNS utilities:
kubectl exec <pod-name> -n <namespace> -- apk add --no-cache bind-tools
kubectl exec <pod-name> -n <namespace> -- nslookup myservice
Or use a base image that already ships DNS utilities (ubuntu, debian) instead of a minimal alpine image.
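If you control the image, baking the tools in avoids installing them at debug time. A sketch Dockerfile (the alpine tag is an arbitrary example):

```dockerfile
# Debug-friendly base: alpine plus BIND tools (dig, nslookup) -- sketch.
FROM alpine:3.19
RUN apk add --no-cache bind-tools
```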
Force CoreDNS pods to restart:
kubectl rollout restart deployment/coredns -n kube-system
kubectl get pods -n kube-system -w | grep coredns
Wait for the pods to return to Running state. Restarting clears the cache and re-establishes connections to upstream DNS.
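The wait can be scripted by parsing the STATUS column. A shell sketch, shown here against sample output since it only parses the table (the label `k8s-app=kube-dns` is the conventional CoreDNS label; verify it in your cluster):

```shell
#!/bin/sh
# Count pods whose STATUS column is not "Running" (sketch).
# Live usage: kubectl get pods -n kube-system -l k8s-app=kube-dns | not_running
not_running() {
    awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }'
}

# Sample `kubectl get pods` output for illustration:
sample='NAME                       READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-abcde   1/1     Running   0          3d
coredns-5d78c9869d-fghij   0/1     Pending   0          10s'

printf '%s\n' "$sample" | not_running   # prints 1
```

Loop until the function prints 0 before declaring the restart complete.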
CoreDNS uses a TTL-based cache, so stale entries may persist briefly after a change. For external DNS forwarders, ensure they are reachable and responsive. In air-gapped environments, configure a local DNS forwarder. Problems in the CNI plugin (Weave, Flannel, etc.) can break pod-to-CoreDNS traffic; verify the CNI is healthy. Custom CoreDNS plugins can interfere with resolution; check the plugin list in the Corefile. In production, monitor CoreDNS CPU and memory; high load causes slow responses and timeout-like behavior.