Can someone explain this in network troubleshooting: Another cause for *CoreDNS* . . .

kanchana:
can someone explain this in network troubleshooting
Another cause for CoreDNS to have CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a loop.

There are many ways to work around this issue, some are listed here:

• Add the following to your kubelet config yaml: resolvConf: <path-to-your-real-resolv-conf-file>. This flag tells kubelet to pass an alternate resolv.conf to Pods. For systems using systemd-resolved, /run/systemd/resolve/resolv.conf is typically the location of the “real” resolv.conf, although this can differ depending on your distribution.
• Disable the local DNS cache on host nodes and restore /etc/resolv.conf to the original.
• A quick fix is to edit your Corefile, replacing forward . /etc/resolv.conf with the IP address of your upstream DNS, for example forward . 8.8.8.8. But this only fixes the issue for CoreDNS; kubelet will continue to forward the invalid resolv.conf to all Pods with the default dnsPolicy, leaving them unable to resolve DNS.
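As a sketch of the first workaround, this is roughly what the kubelet config change looks like (the file path is the typical kubeadm location and is an assumption; adjust for your setup, and restart kubelet afterwards):

```yaml
# /var/lib/kubelet/config.yaml (typical kubeadm location; yours may differ)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point kubelet at the "real" resolv.conf so Pods do not inherit the
# 127.0.0.53 stub entry written by systemd-resolved.
resolvConf: /run/systemd/resolve/resolv.conf
```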

Alistair Mackay:
Hi,

The loop is caused when /etc/resolv.conf contains an entry for 127.0.0.53, which refers to a DNS service on the local machine. The CoreDNS Pod interprets this as an upstream server to forward queries to, but that forward ends up back at CoreDNS, so an infinite loop would ensue. CoreDNS detects this and exits with a fatal error, hence the CrashLoopBackOff:

[FATAL] plugin/loop: Loop (127.0.0.1:49048 -> :53) detected for zone ".", ...
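The detection comes from CoreDNS's loop plugin, which is enabled in the default Corefile. A minimal sketch of the relevant Corefile fragment (the real default Corefile has more plugins):

```
.:53 {
    errors
    loop                           # sends a probe query; if it comes back, CoreDNS exits fatally
    forward . /etc/resolv.conf     # if resolv.conf lists 127.0.0.53, the query loops back here
    cache 30
}
```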

On systemd distros (which covers most recent Linux flavors), systemd-resolved creates /run/systemd/resolve/resolv.conf at startup, and this file is generally a better choice because it lists the real upstream nameservers rather than the local stub address.

Note also that 8.8.8.8 is the address of Google’s public DNS service and is a reasonable choice for an upstream server. On a corporate network you would use the corporate DNS server for upstream resolution; the Networks team would provide its IP address, though in most cases the team deploying servers ensures a correct resolv.conf.

An upstream server is another DNS server where the local DNS (e.g. CoreDNS) forwards queries it cannot answer itself.
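As a toy illustration of forwarding and why the loop check matters (this is a sketch, not CoreDNS's actual implementation; all names and IPs are illustrative):

```python
# Toy model of DNS forwarding with CoreDNS-style loop detection.

class Resolver:
    def __init__(self, name, records=None, upstream=None):
        self.name = name
        self.records = records or {}   # names this resolver can answer itself
        self.upstream = upstream       # another Resolver to forward misses to

    def query(self, qname, seen=None):
        seen = seen or set()
        if self.name in seen:
            # Analogous to CoreDNS's plugin/loop fatal error.
            raise RuntimeError(f"loop detected at {self.name}")
        seen.add(self.name)
        if qname in self.records:
            return self.records[qname]
        if self.upstream is not None:
            return self.upstream.query(qname, seen)
        return None  # NXDOMAIN

# Normal case: local resolver forwards misses to an upstream server.
upstream = Resolver("8.8.8.8", records={"example.com": "93.184.216.34"})
coredns = Resolver("coredns", records={"svc.cluster.local": "10.96.0.1"},
                   upstream=upstream)
print(coredns.query("example.com"))   # answered by the upstream

# Misconfiguration: the upstream points back at the resolver itself,
# like forwarding to 127.0.0.53 from inside the CoreDNS Pod.
coredns.upstream = coredns
try:
    coredns.query("example.com")
except RuntimeError as e:
    print(e)
```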

See
https://serverfault.com/questions/1081862/coredns-failing-due-to-a-loop-how-to-feed-kubelet-with-proper-resolvconf
https://coredns.io/plugins/loop/

kanchana:
thanks, but in the kubeapi server config yaml file we do have a DNS server IP address; if we feed that in, will it not take that server IP address as DNS?

kanchana:
please confirm

Alistair Mackay:
You mean the kubelet config yaml, right?

The clusterDNS setting would, if set, normally point to the clusterIP address of the kube-dns Service, which is the Kubernetes Service that fronts CoreDNS. You can find it with

kubectl get service kube-dns -n kube-system
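Kubelet then hands that clusterIP to Pods as their nameserver. A sketch of the corresponding kubelet config fragment (10.96.0.10 is the common kubeadm default, but verify the actual clusterIP with the kubectl command above):

```yaml
# Fragment of /var/lib/kubelet/config.yaml (path typical for kubeadm)
clusterDNS:
  - 10.96.0.10        # clusterIP of the kube-dns Service; yours may differ
clusterDomain: cluster.local
```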