CoreDNS detected a DNS query loop, usually caused by misconfigured forwarders or upstream resolvers. While the loop persists, cluster DNS resolution fails. Fix the forwarder configuration, review the Corefile settings, or (as a last resort) disable loop detection.
DNS loops occur when CoreDNS forwards queries in a circle that eventually returns to itself. This typically happens when CoreDNS forwards to an upstream that sends queries back to CoreDNS, or when systemd-resolved or another local resolver creates a circular dependency. When the loop plugin detects this, CoreDNS logs a fatal error and halts, breaking DNS resolution.
First, verify whether systemd-resolved is the root cause:

```bash
head -5 /etc/resolv.conf
```

Look for nameserver entries such as 127.0.0.53. If present, systemd-resolved is involved.
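On such a host, the managed file typically looks like this (contents vary by distribution):

```
# /etc/resolv.conf as generated by systemd-resolved
nameserver 127.0.0.53
options edns0 trust-ad
```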
Find the actual resolv.conf file with real upstream nameservers:
```bash
# On systemd-resolved systems, find the real config:
systemctl status systemd-resolved
ls -la /etc/resolv.conf
# Check if it's a symlink (likely on systemd-resolved systems):
cat /run/systemd/resolve/resolv.conf
```

This file usually contains the actual upstream nameservers (e.g., 8.8.8.8, 1.1.1.1).
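You can also ask systemd-resolved directly which upstream servers it uses:

```bash
resolvectl status          # newer systemd releases
systemd-resolve --status   # older releases
```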
Edit the kubelet configuration file (typically /var/lib/kubelet/config.yaml on kubeadm clusters; some distributions use /etc/kubernetes/kubelet/kubelet-config.yaml or /etc/sysconfig/kubelet):
```yaml
resolvConf: /run/systemd/resolve/resolv.conf
```

If the kubelet is configured through command-line flags instead, the equivalent is:

```
--resolv-conf=/run/systemd/resolve/resolv.conf
```

Either way, use the real resolv.conf path you found in step 2.
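For context, a minimal sketch of where this key sits in a KubeletConfiguration file (apiVersion and kind as in stock kubeadm installs; other settings elided):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# ...other settings unchanged...
resolvConf: /run/systemd/resolve/resolv.conf
```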
Restart the kubelet service to pick up the new configuration:
```bash
sudo systemctl restart kubelet
# Verify it's running:
sudo systemctl status kubelet
```

Note that running pods keep the resolv.conf they were created with, so restarting the kubelet alone does not fix existing CoreDNS pods; recreate them as shown below.
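Deleting the CoreDNS pods forces their Deployment to recreate them with the corrected upstream configuration. Wait 30-60 seconds for them to come back up:

```bash
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```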
Check CoreDNS pod status:
```bash
kubectl get pods -n kube-system -l k8s-app=kube-dns
# Add -o wide to see which node each pod landed on:
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
```

All CoreDNS pods should be in Running status.
Check the CoreDNS logs for the loop error:

```bash
kubectl logs -n kube-system -l k8s-app=kube-dns | tail -20
```
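While the loop exists, the loop plugin logs a fatal message before CoreDNS exits; it looks roughly like this (illustrative; the port and the random HINFO probe name will differ):

```
[FATAL] plugin/loop: Loop (127.0.0.1:55953 -> :53) detected for zone ".",
see https://coredns.io/plugins/loop#troubleshooting.
Query: "HINFO 4547991504243258144.3688648895315093531."
```

After the fix, this message should no longer appear.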
Verify DNS works by launching a test pod:

```bash
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- sh
# Inside the pod:
nslookup kubernetes.default
nslookup google.com
```

If both resolve successfully, the fix is complete.
If you initialized the cluster with kubeadm and still have issues:
```bash
# Check whether kubeadm pointed the kubelet at the real resolv.conf
# (kubeadm writes the kubelet's flags and config to these files):
grep resolv /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml
```

If a node still points at the stub resolver, update its kubelet configuration as described above and restart the kubelet on that node. kubeadm 1.10+ automatically detects systemd-resolved and sets the correct resolv.conf path.
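Depending on the kubeadm version, the setting lands either in the flags file or in the kubelet's config.yaml; sketches of both (other entries elided):

```
# /var/lib/kubelet/kubeadm-flags.env (older versions)
KUBELET_KUBEADM_ARGS="--resolv-conf=/run/systemd/resolve/resolv.conf ..."

# /var/lib/kubelet/config.yaml (newer versions)
resolvConf: /run/systemd/resolve/resolv.conf
```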
Test full DNS chain from multiple pods:
```bash
# Test internal DNS:
kubectl run test-int -it --rm --restart=Never --image=alpine -- nslookup kubernetes.default
# Test external DNS:
kubectl run test-ext -it --rm --restart=Never --image=alpine -- nslookup google.com
# Check CoreDNS endpoints:
kubectl get endpoints -n kube-system kube-dns
```

All lookups should resolve without errors.
### systemd-resolved Details
systemd-resolved is the caching stub resolver shipped with systemd on many modern Linux distributions. It listens at 127.0.0.53:53 and manages /etc/resolv.conf. This conflicts with Kubernetes because:
1. CoreDNS reads /etc/resolv.conf to find its upstream servers
2. If that file points to 127.0.0.53, CoreDNS becomes its own upstream
3. Every query bounces between CoreDNS and itself until the loop plugin detects the cycle (see the Corefile sketch below)
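An abridged sketch of a typical kubeadm-installed Corefile shows both halves of the problem: the forward plugin reads /etc/resolv.conf, and the loop plugin is what detects the resulting cycle:

```
.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf   # resolves to 127.0.0.53 on affected hosts
    cache 30
    loop                         # detects the query cycle and halts CoreDNS
    reload
}
```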
### Alternative: Disable systemd-resolved
For testing only (not recommended in production):
```bash
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
```

Then restore /etc/resolv.conf with real nameservers.
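A minimal sketch of restoring the file, using Google and Cloudflare as example upstreams (substitute your own nameservers):

```bash
# /etc/resolv.conf is usually a symlink to the stub config; replace it
sudo rm /etc/resolv.conf
printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' | sudo tee /etc/resolv.conf
```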
### kubeadm Automatic Detection
kubeadm 1.10+ automatically:
- Detects systemd-resolved
- Sets --resolv-conf=/run/systemd/resolve/resolv.conf for the kubelet
- Applies this on each node during initialization
### Containerd vs Docker
This issue affects any container runtime. Both containerd and Docker will inherit the kubelet's resolv.conf configuration.
### Multi-node Cluster Fixes
For control plane and worker nodes:
1. Update the kubelet config on every node
2. Restart the kubelet on every node
3. Delete the CoreDNS pods so they are recreated with the corrected resolv.conf
4. Monitor the CoreDNS pods until they are Running across the cluster (see the sketch after this list)
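A sketch of rolling the fix across a small cluster over SSH, assuming hypothetical node names (cp-1, worker-1, worker-2) and the kubeadm default config path:

```bash
# Hypothetical node names; adjust the kubelet config path for your distro
for node in cp-1 worker-1 worker-2; do
  ssh "$node" "sudo sed -i 's|^resolvConf:.*|resolvConf: /run/systemd/resolve/resolv.conf|' /var/lib/kubelet/config.yaml \
    && sudo systemctl restart kubelet"
done
# Recreate the CoreDNS pods and watch them come back up
kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pods -l k8s-app=kube-dns -w
```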