Getting a timeout error when trying to initialize a local cluster using kubeadm init

Could someone help me figure out why the kubelet is failing? I used the command below to initialize the cluster; its output and the kubelet logs follow:

vagrant@master-1:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-1] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Logs of Kubelet:

vagrant@master-1:~$ sudo journalctl -xeu kubelet
-- Automatic restarting of the unit kubelet.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Aug 15 13:09:02 master-1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit kubelet.service has finished shutting down.
Aug 15 13:09:02 master-1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is RESULT.
Aug 15 13:09:02 master-1 kubelet[30031]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Aug 15 13:09:02 master-1 kubelet[30031]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.660104   30031 server.go:440] "Kubelet version" kubeletVersion="v1.22.0"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.660677   30031 server.go:868] "Client rotation is on, will bootstrap in background"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.663250   30031 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-clie
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.664501   30031 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kuber
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.691644   30031 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaul
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.691838   30031 container_manager_linux.go:280] "Container manager verified user specified cgroup-root
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.691906   30031 container_manager_linux.go:285] "Creating Container Manager object based on Node Config
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.692345   30031 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPoli
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.692363   30031 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=tr
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.692392   30031 state_mem.go:36] "Initialized new in-memory state store"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.692436   30031 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fledged C
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.692457   30031 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/do
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.692472   30031 client.go:97] "Start docker client with request timeout" timeout="2m0s"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.693852   30031 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.693877   30031 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.693959   30031 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.697309   30031 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.697541   30031 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.697445   30031 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPlu
Aug 15 13:09:02 master-1 kubelet[30031]: I0815 13:09:02.705599   30031 docker_service.go:264] "Docker Info" dockerInfo=&{ID:FRX6:6KMG:I4JN:FL7C:UD2H:VLP4:7DOP
Aug 15 13:09:02 master-1 kubelet[30031]: E0815 13:09:02.706027   30031 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: ku
Aug 15 13:09:02 master-1 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 15 13:09:02 master-1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

@mmumshad I followed the lecture on setting up a cluster with kubeadm exactly, and I get the same error even in the exercise lab. Please help.

Restart your VM and check whether the kubelet is running. Then reset your cluster with the “kubeadm reset” command, and finally create your cluster again.

Kubelet is not running upon restart either. See below:

vagrant@master-1:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sun 2021-08-15 14:32:15 UTC; 8s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 3103 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAIL
 Main PID: 3103 (code=exited, status=1/FAILURE)

Aug 15 14:32:15 master-1 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 15 14:32:15 master-1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
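The journal output pasted earlier is cut off mid-line at the terminal width, which hides the actual failure reason. One way to read the full error is to dump the unit's journal without the pager and grep for the fatal line. A sketch (the sample log line below is an assumption standing in for a real dump, with the truncated message left elided):

```shell
# On the node, dump the full kubelet journal so long lines are not
# chopped at the terminal width:
#   sudo journalctl -u kubelet --no-pager > kubelet-journal.log
# The sample line below stands in for that dump (assumption for illustration).
cat > kubelet-journal.log <<'EOF'
Aug 15 13:09:02 master-1 kubelet[30031]: E0815 13:09:02.706027 30031 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: ..."
EOF

# The "Failed to run kubelet" line names the underlying misconfiguration.
grep 'Failed to run kubelet' kubelet-journal.log
```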

I tried installing an older version of kubeadm (v1.20) and it worked fine.

Good day hmza26, I ran into what seems to be the same problem, with identical logs (thanks for including how to view them).
I have it working now; I’ll summarize the steps and link to the site where I found the fix.

For kubeadm v1.22 you need to change your Docker cgroup driver to systemd. (kubeadm now defaults the kubelet’s cgroup driver to systemd, so Docker has to use the same driver or the kubelet exits at startup with the “misconfiguration” error shown above.)

You can see your current cgroup driver by running:
docker system info | grep -i driver
It is probably cgroupfs.

You can change the cgroup driver by creating the file
/etc/docker/daemon.json
with these contents:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
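A malformed daemon.json stops the Docker daemon from starting at all, so it may be worth generating the file and validating it before installing it. A sketch (it writes a local working copy rather than /etc/docker/daemon.json directly, and assumes python3 is available for the JSON check):

```shell
# Write a working copy of the config locally first.
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Sanity-check the JSON before installing it; a typo here would
# prevent Docker from starting.
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json is valid"

# Then install it on the node:
#   sudo cp daemon.json /etc/docker/daemon.json
```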

Then reload systemd and restart Docker:
sudo systemctl daemon-reload && sudo systemctl restart docker

I found the details here:
https://sysnet4admin.gitbook.io/k8s/trouble-shooting/cluster-build/kubelet-is-not-properly-working-on-1.22-version

Then I ran:
sudo kubeadm reset
sudo kubeadm init

with success:
systemctl status kubelet
now reports “active (running)”.
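As a final check, Docker and the kubelet should now agree on the driver: `docker info --format '{{.CgroupDriver}}'` should print systemd, and the kubeadm-generated kubelet config should pin the same value. The snippet below greps a sample of that config (contents are an assumption based on kubeadm v1.22 defaults; on the node the real file is /var/lib/kubelet/config.yaml):

```shell
# Sample of the relevant part of the kubeadm-generated kubelet config
# (assumption: kubeadm v1.22 defaults cgroupDriver to systemd).
cat > kubelet-config-sample.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# Both sides should report "systemd"; a mismatch reproduces the
# "Failed to run kubelet ... misconfiguration" error above.
grep '^cgroupDriver:' kubelet-config-sample.yaml
# On the real node:
#   docker info --format '{{.CgroupDriver}}'
#   sudo grep cgroupDriver /var/lib/kubelet/config.yaml
```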

I hope this helps.
