Mock Exam 2: Getting a connection refused error

Hi Team,

I am getting the error mentioned below:

student-node ~ ➜ k get po -A
The connection to the server cluster4-controlplane:6443 was refused - did you specify the right host or port?

Initially, the live kubelet logs showed an etcd volume name issue, which I fixed, but now I am getting the errors below:
Jul 25 11:12:05 cluster4-controlplane kubelet[27727]: E0725 11:12:05.500797 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:05 cluster4-controlplane kubelet[27727]: E0725 11:12:05.501024 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:05 cluster4-controlplane kubelet[27727]: E0725 11:12:05.501224 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:05 cluster4-controlplane kubelet[27727]: E0725 11:12:05.501441 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:05 cluster4-controlplane kubelet[27727]: E0725 11:12:05.501460 27727 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Jul 25 11:12:10 cluster4-controlplane kubelet[27727]: E0725 11:12:10.804391 27727 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/events/kube-scheduler-cluster4-controlplane.17e5701d7e737548\": dial tcp 192.18.79.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-cluster4-controlplane.17e5701d7e737548 kube-system 7738 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-cluster4-controlplane,UID:b86d856029a668abfc63fe1245b4deb0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod kube-scheduler-cluster4-controlplane_kube-system(b86d856029a668abfc63fe1245b4deb0),Source:EventSource{Component:kubelet,Host:cluster4-controlplane,},FirstTimestamp:2024-07-25 11:08:24 +0000 UTC,LastTimestamp:2024-07-25 11:09:43.959442809 +0000 UTC m=+382.897290416,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cluster4-controlplane,}"
Jul 25 11:12:11 cluster4-controlplane kubelet[27727]: E0725 11:12:11.018305 27727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://cluster4-controlplane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused" interval="7s"
Jul 25 11:12:11 cluster4-controlplane kubelet[27727]: I0725 11:12:11.311421 27727 status_manager.go:853] "Failed to get status for pod" podUID="52403efd175ee77383c3d28457d5ce55" pod="kube-system/kube-apiserver-cluster4-controlplane" err="Get \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-cluster4-controlplane\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:11 cluster4-controlplane kubelet[27727]: I0725 11:12:11.311732 27727 status_manager.go:853] "Failed to get status for pod" podUID="5554b17f6fa1b188f6de3db4f098636a" pod="kube-system/kube-controller-manager-cluster4-controlplane" err="Get \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-cluster4-controlplane\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:11 cluster4-controlplane kubelet[27727]: I0725 11:12:11.312081 27727 status_manager.go:853] "Failed to get status for pod" podUID="b86d856029a668abfc63fe1245b4deb0" pod="kube-system/kube-scheduler-cluster4-controlplane" err="Get \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-cluster4-controlplane\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: I0725 11:12:15.310786 27727 scope.go:117] "RemoveContainer" containerID="7da8efc2cbfdb8c1cf4c727db2f9333171dda6730fc02b10e99b63e20e74d56f"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.311257 27727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-cluster4-controlplane_kube-system(52403efd175ee77383c3d28457d5ce55)\"" pod="kube-system/kube-apiserver-cluster4-controlplane" podUID="52403efd175ee77383c3d28457d5ce55"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.762905 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?resourceVersion=0&timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.763244 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.763574 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.763764 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.763944 27727 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cluster4-controlplane\": Get \"https://cluster4-controlplane:6443/api/v1/nodes/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:15 cluster4-controlplane kubelet[27727]: E0725 11:12:15.763960 27727 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Jul 25 11:12:18 cluster4-controlplane kubelet[27727]: E0725 11:12:18.019026 27727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://cluster4-controlplane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cluster4-controlplane?timeout=10s\": dial tcp 192.18.79.9:6443: connect: connection refused" interval="7s"
Jul 25 11:12:20 cluster4-controlplane kubelet[27727]: E0725 11:12:20.100104 27727 webhook.go:154] Failed to make webhook authenticator request: Post "https://cluster4-controlplane:6443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 192.18.79.9:6443: connect: connection refused
Jul 25 11:12:20 cluster4-controlplane kubelet[27727]: E0725 11:12:20.100186 27727 server.go:310] "Unable to authenticate the request due to an error" err="Post \"https://cluster4-controlplane:6443/apis/authentication.k8s.io/v1/tokenreviews\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:20 cluster4-controlplane kubelet[27727]: E0725 11:12:20.805798 27727 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/events/kube-scheduler-cluster4-controlplane.17e5701d7e737548\": dial tcp 192.18.79.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-cluster4-controlplane.17e5701d7e737548 kube-system 7738 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-cluster4-controlplane,UID:b86d856029a668abfc63fe1245b4deb0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod kube-scheduler-cluster4-controlplane_kube-system(b86d856029a668abfc63fe1245b4deb0),Source:EventSource{Component:kubelet,Host:cluster4-controlplane,},FirstTimestamp:2024-07-25 11:08:24 +0000 UTC,LastTimestamp:2024-07-25 11:09:43.959442809 +0000 UTC m=+382.897290416,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cluster4-controlplane,}"
Jul 25 11:12:21 cluster4-controlplane kubelet[27727]: I0725 11:12:21.311235 27727 status_manager.go:853] "Failed to get status for pod" podUID="52403efd175ee77383c3d28457d5ce55" pod="kube-system/kube-apiserver-cluster4-controlplane" err="Get \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-cluster4-controlplane\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:21 cluster4-controlplane kubelet[27727]: I0725 11:12:21.311503 27727 status_manager.go:853] "Failed to get status for pod" podUID="5554b17f6fa1b188f6de3db4f098636a" pod="kube-system/kube-controller-manager-cluster4-controlplane" err="Get \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-cluster4-controlplane\": dial tcp 192.18.79.9:6443: connect: connection refused"
Jul 25 11:12:21 cluster4-controlplane kubelet[27727]: I0725 11:12:21.311757 27727 status_manager.go:853] "Failed to get status for pod" podUID="b86d856029a668abfc63fe1245b4deb0" pod="kube-system/kube-scheduler-cluster4-controlplane" err="Get \"https://cluster4-controlplane:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-cluster4-controlplane\": dial tcp 192.18.79.9:6443: connect: connection refused"
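Since the logs show kube-apiserver itself in CrashLoopBackOff, here is a sketch of the checks I have been running on the controlplane node (this assumes a kubeadm setup with containerd and crictl available; paths are the kubeadm defaults):

```shell
# See whether the kube-apiserver and etcd containers keep exiting:
crictl ps -a | grep -E 'kube-apiserver|etcd'

# Read the last logs of the most recent kube-apiserver container:
crictl logs --tail 30 $(crictl ps -a --name kube-apiserver -q | head -1)

# If the apiserver complains it cannot reach etcd, re-check the
# static pod manifest that was edited earlier:
cat /etc/kubernetes/manifests/etcd.yaml

# The kubelet re-creates static pods automatically once the manifest
# is fixed; watch for the containers to come back up:
watch crictl ps
```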

But while trying to take a backup of the etcd database, I am getting the error below.

student-node ~ ✖ ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-boot-cka18-trb.db
-su: etcdctl: command not found

Please help me; if etcdctl needs to be installed on the student-node, please share the steps to follow.
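For what it's worth, this is what I am considering trying next (just my assumption, not verified against the exam environment): on kubeadm clusters etcdctl is normally present on the controlplane node itself, together with the certificates, so the snapshot could be taken there instead of on the student-node:

```shell
# Option A: run the backup on the controlplane, where etcdctl and the
# /etc/kubernetes/pki/etcd certs both live:
ssh cluster4-controlplane
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/etcd-boot-cka18-trb.db

# Option B: install the client on the student-node (Debian/Ubuntu);
# note the certs referenced above would still only exist on the node:
sudo apt-get update && sudo apt-get install -y etcd-client
```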

Which course is this for? I assume it is for the CKA “Ultimate Mock Exam”, but it would be good to be sure about that. If so, it suggests that you’ve put etcd.yaml into a bad state. Also: please add the Question Number.
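If it is that question, a quick way to sanity-check the edit to etcd.yaml would be something like the following (the manifest path is the kubeadm default; adjust if yours differs):

```shell
# Confirm the volume names, mount paths, and hostPath entries in the
# etcd static pod manifest line up with each other:
grep -n -A3 'volumeMounts:\|volumes:\|hostPath:' /etc/kubernetes/manifests/etcd.yaml

# The kubelet restarts the static pod once the file is saved; follow
# its logs to confirm etcd (and then the apiserver) recover:
journalctl -u kubelet -f | grep -i etcd
```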