Hi All, kube-apiserver static pod is not getting created after configuring with . . .

Venkat:
Hi All, the kube-apiserver static pod is not getting created after configuring the audit policy and other audit options as part of the audit lab. Can you help me understand what's wrong here?
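
(For reference, the audit setup in this kind of lab usually amounts to a couple of extra flags plus matching volume mounts in the apiserver manifest; the paths below are assumptions, not necessarily the lab's exact values.)

# Typical audit-related flags in /etc/kubernetes/manifests/kube-apiserver.yaml:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit/audit.log
# Both paths must also be exposed to the pod via volumes/volumeMounts,
# otherwise the apiserver exits immediately on startup.
grep -n 'audit' /etc/kubernetes/manifests/kube-apiserver.yaml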

Venkat:
I see the errors below in the kubelet logs:

Nov 22 14:16:48 controlplane kubelet[2577]: E1122 14:16:48.148486    2577 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"controlplane\": Get \"https://controlplane:6443/api/v1/nodes/controlplane?timeout=10s\": dial tcp 10.47.227.3:6443: connect: connection refused"
Nov 22 14:16:48 controlplane kubelet[2577]: E1122 14:16:48.148853    2577 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"controlplane\": Get \"https://controlplane:6443/api/v1/nodes/controlplane?timeout=10s\": dial tcp 10.47.227.3:6443: connect: connection refused"
Nov 22 14:16:48 controlplane kubelet[2577]: E1122 14:16:48.149173    2577 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"controlplane\": Get \"https://controlplane:6443/api/v1/nodes/controlplane?timeout=10s\": dial tcp 10.47.227.3:6443: connect: connection refused"
Nov 22 14:16:48 controlplane kubelet[2577]: E1122 14:16:48.149457    2577 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"controlplane\": Get \"https://controlplane:6443/api/v1/nodes/controlplane?timeout=10s\": dial tcp 10.47.227.3:6443: connect: connection refused"
Nov 22 14:16:48 controlplane kubelet[2577]: E1122 14:16:48.149486    2577 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"

Trung Tran:
Some tips:
• Always back up the kube-apiserver.yaml file before modifying it, so you can revert when something goes wrong
• Sometimes the static pod doesn't restart after the YAML file is changed; restart the kubelet to trigger it (systemctl restart kubelet)
• Use crictl to watch for the kube-apiserver container (crictl ps -a | grep api); see the sketch below
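
A minimal sketch of that workflow, assuming the default kubeadm manifest path (keep the backup outside /etc/kubernetes/manifests so the kubelet doesn't try to run the copy):

# Back up the manifest before editing it
cp /etc/kubernetes/manifests/kube-apiserver.yaml /root/kube-apiserver.yaml.bak

# If the static pod doesn't restart after the edit, nudge the kubelet
systemctl restart kubelet

# Watch for the kube-apiserver container (it may show up as Exited if the config is bad)
watch 'crictl ps -a | grep kube-apiserver'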

Venkat:
Thanks Trung. Even restoring the original kube-apiserver manifest file didn't help. Will retry the lab. Thank you.

Venkat:
Restarting the kubelet didn't help either.

Trung Tran:
Check the apiserver's container with crictl; there should be an exited container. Get its ID and then check the logs to see what went wrong.
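
Something like this, where <container-id> is a placeholder for the ID shown by the first command:

# List all containers, including exited ones, and find the apiserver
crictl ps -a | grep kube-apiserver

# Read the logs of the exited container to see why it crashed
crictl logs <container-id>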

Shwetha Shenoy V:
check the apiserver logs

Shwetha Shenoy V:
You can look for the apiserver logs under /var/log/pods or /var/log/containers. That should tell you exactly what the issue with the apiserver is.
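
For example (assuming a kubeadm setup where the apiserver pod lives in the kube-system namespace):

# Static pod logs stay on disk even after the container is gone
ls /var/log/pods/ | grep kube-apiserver
tail -n 50 /var/log/pods/kube-system_kube-apiserver-*/kube-apiserver/*.log

# /var/log/containers holds symlinks to the same files
ls -l /var/log/containers/ | grep kube-apiserver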

Shwetha Shenoy V:
If there is an issue with the apiserver config, it should show up there. If there are no logs from the apiserver at all, check the syntax of the audit policy YAML.
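
As a reference point, this minimal policy is known to parse; the path /etc/kubernetes/audit-policy.yaml is an assumption, so use whatever path the lab specifies (and make sure that same path is mounted into the apiserver pod):

# Smallest valid audit policy: log everything at Metadata level
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
EOF

Common syntax slips are a wrong apiVersion (it must be audit.k8s.io/v1) or bad indentation under rules.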

Venkat:
Thanks Trung and Shwetha

unnivkn:
Hi @Venkat fyr: https://github.com/kodekloudhub/community-faq/blob/main/docs/diagnose-crashed-apiserver.md

Venkat:
Thanks @unnivkn. That helps