Timeout when trying to initialize a local cluster using kubeadm init

@mmumshad please help!

vagrant@kubemaster:~$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers

[init] Using Kubernetes version: v1.29.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0303 14:43:22.864317    7599 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

My laptop

macOS Ventura 13.6.4
16 GB RAM
2.3 GHz dual-core CPU

My steps

git clone https://github.com/kodekloudhub/certified-kubernetes-administrator-course
cd certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
vagrant up

Install containerd

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io

cat <<EOF | sudo tee /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sudo systemctl restart containerd
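
For comparison, a common alternative is to start from containerd's full default config instead of overwriting config.toml with only this snippet, so the rest of the CRI defaults are kept. This is only a sketch, not part of the course steps; the sandbox-image line is an assumption based on the pause-image warning in the kubeadm output above.

# Sketch: generate the full default config, then enable the systemd cgroup driver
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Assumption: also point the CRI sandbox image at the mirror so containerd does
# not try to pull registry.k8s.io/pause:3.6 (see the kubeadm warning above)
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"|' /etc/containerd/config.toml
sudo systemctl restart containerd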

Forwarding IPv4 and letting iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
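
A quick sanity check that the modules and sysctl values actually took effect (not part of the original steps, just a verification sketch):

# Both modules should be listed, and all three values should print as 1
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward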

Install kubeadm, kubelet and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
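
To confirm the expected v1.29 packages landed and are pinned, a quick check (again only a verification sketch):

# Both versions should report v1.29.x, and all three packages should be on hold
kubeadm version -o short
kubelet --version
apt-mark showhold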

Create the local cluster on the master node

sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
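
The kubeadm preflight output above mentions that images can be pulled ahead of time with 'kubeadm config images pull'; running that separately first makes any registry problem fail fast instead of surfacing as an init timeout (a sketch using the same mirror flag):

# Pre-pull the control-plane images so registry errors show up immediately
sudo kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers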

Kubelet log

Mar 03 14:43:26 kubemaster systemd[1]: Started kubelet: The Kubernetes Node Agent.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit kubelet.service has finished successfully.
░░
░░ The job identifier is 3173.
Mar 03 14:43:26 kubemaster kubelet[7693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 03 14:43:26 kubemaster kubelet[7693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 03 14:43:26 kubemaster kubelet[7693]: I0303 14:43:26.881066    7693 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.365437    7693 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.365580    7693 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.365962    7693 server.go:919] "Client rotation is on, will bootstrap in background"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.369924    7693 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://192.168.56.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.370186    7693 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.377271    7693 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378159    7693 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378507    7693 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName">
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378597    7693 topology_manager.go:138] "Creating topology manager with none policy"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378611    7693 container_manager_linux.go:301] "Creating device plugin manager"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378693    7693 state_mem.go:36] "Initialized new in-memory state store"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378769    7693 kubelet.go:396] "Attempting to sync node with API server"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378782    7693 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378808    7693 kubelet.go:312] "Adding apiserver pod source"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.378820    7693 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 03 14:43:27 kubemaster kubelet[7693]: W0303 14:43:27.381286    7693 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.56.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubemaster&limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.381632    7693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.56.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubemaster&limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.382146    7693 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.28" apiVersion="v1"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.382497    7693 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.383999    7693 server.go:1256] "Started kubelet"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.387085    7693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.390703    7693 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.391369    7693 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.56.11:6443/api/v1/namespaces/default/events\": dial tcp 192.168.56.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kubemaster.17b94847f0d12beb  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Na>
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.392032    7693 server.go:461] "Adding debug handlers to kubelet server"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.394153    7693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.394466    7693 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.395992    7693 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.396427    7693 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.396872    7693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.56.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubemaster?timeout=10s\": dial tcp 192.168.56.11:6443: connect: connection refused" interval="200ms"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.397633    7693 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.397844    7693 factory.go:221] Registration of the systemd container factory successfully
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.397963    7693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.398349    7693 reconciler_new.go:29] "Reconciler: start to sync state"
Mar 03 14:43:27 kubemaster kubelet[7693]: W0303 14:43:27.399779    7693 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.56.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.399869    7693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.56.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.400049    7693 factory.go:221] Registration of the containerd container factory successfully
Mar 03 14:43:27 kubemaster kubelet[7693]: W0303 14:43:27.400360    7693 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.56.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.400560    7693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.56.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.421719    7693 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.421782    7693 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.421820    7693 state_mem.go:36] "Initialized new in-memory state store"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.423751    7693 policy_none.go:49] "None policy: Start"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.425394    7693 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.425454    7693 state_mem.go:35] "Initializing new in-memory state store"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.436054    7693 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.436373    7693 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.439334    7693 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubemaster\" not found"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.439921    7693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.443118    7693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.443204    7693 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.443234    7693 kubelet.go:2329] "Starting kubelet main sync loop"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.443283    7693 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 03 14:43:27 kubemaster kubelet[7693]: W0303 14:43:27.444188    7693 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://192.168.56.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.445228    7693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://192.168.56.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.498823    7693 kubelet_node_status.go:73] "Attempting to register node" node="kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.499848    7693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://192.168.56.11:6443/api/v1/nodes\": dial tcp 192.168.56.11:6443: connect: connection refused" node="kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.544621    7693 topology_manager.go:215] "Topology Admit Handler" podUID="3473f32dd7b41222d559b154fac715bc" podNamespace="kube-system" podName="etcd-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.547511    7693 topology_manager.go:215] "Topology Admit Handler" podUID="bab196b4accd342a89a5aeafe6fbd221" podNamespace="kube-system" podName="kube-apiserver-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.550250    7693 topology_manager.go:215] "Topology Admit Handler" podUID="57c71e32a1652dbd4218bb4ab2a29d0d" podNamespace="kube-system" podName="kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.553553    7693 topology_manager.go:215] "Topology Admit Handler" podUID="c69109bac5d8d5815ec8348ae982ac8b" podNamespace="kube-system" podName="kube-scheduler-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.598516    7693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.56.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubemaster?timeout=10s\": dial tcp 192.168.56.11:6443: connect: connection refused" interval="400ms"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.598747    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-flexvolume-dir\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.599119    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-k8s-certs\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.599309    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-usr-share-ca-certificates\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.599463    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bab196b4accd342a89a5aeafe6fbd221-etc-ca-certificates\") pod \"kube-apiserver-kubemaster\" (UID: \"bab196b4accd342a89a5aeafe6fbd221\") " pod="kube-system/kube-apiserver-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.599629    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bab196b4accd342a89a5aeafe6fbd221-usr-share-ca-certificates\") pod \"kube-apiserver-kubemaster\" (UID: \"bab196b4accd342a89a5aeafe6fbd221\") " pod="kube-system/kube-apiserver-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.599888    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-ca-certs\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.600110    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-etc-ca-certificates\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.600382    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubem>
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.600679    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c69109bac5d8d5815ec8348ae982ac8b-kubeconfig\") pod \"kube-scheduler-kubemaster\" (UID: \"c69109bac5d8d5815ec8348ae982ac8b\") " pod="kube-system/kube-scheduler-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.601220    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3473f32dd7b41222d559b154fac715bc-etcd-certs\") pod \"etcd-kubemaster\" (UID: \"3473f32dd7b41222d559b154fac715bc\") " pod="kube-system/etcd-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.601504    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bab196b4accd342a89a5aeafe6fbd221-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubemaster\" (UID: \"bab196b4accd342a89a5aeafe6fbd221\") " pod="kube-system/kube-apiserver-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.601724    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bab196b4accd342a89a5aeafe6fbd221-k8s-certs\") pod \"kube-apiserver-kubemaster\" (UID: \"bab196b4accd342a89a5aeafe6fbd221\") " pod="kube-system/kube-apiserver-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.602003    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57c71e32a1652dbd4218bb4ab2a29d0d-kubeconfig\") pod \"kube-controller-manager-kubemaster\" (UID: \"57c71e32a1652dbd4218bb4ab2a29d0d\") " pod="kube-system/kube-controller-manager-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.602240    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3473f32dd7b41222d559b154fac715bc-etcd-data\") pod \"etcd-kubemaster\" (UID: \"3473f32dd7b41222d559b154fac715bc\") " pod="kube-system/etcd-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.602444    7693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bab196b4accd342a89a5aeafe6fbd221-ca-certs\") pod \"kube-apiserver-kubemaster\" (UID: \"bab196b4accd342a89a5aeafe6fbd221\") " pod="kube-system/kube-apiserver-kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: I0303 14:43:27.703177    7693 kubelet_node_status.go:73] "Attempting to register node" node="kubemaster"
Mar 03 14:43:27 kubemaster kubelet[7693]: E0303 14:43:27.704210    7693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://192.168.56.11:6443/api/v1/nodes\": dial tcp 192.168.56.11:6443: connect: connection refused" node="kubemaster"
Mar 03 14:43:28 kubemaster kubelet[7693]: E0303 14:43:28.000901    7693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.56.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubemaster?timeout=10s\": dial tcp 192.168.56.11:6443: connect: connection refused" interval="800ms"
Mar 03 14:43:28 kubemaster kubelet[7693]: I0303 14:43:28.106496    7693 kubelet_node_status.go:73] "Attempting to register node" node="kubemaster"
Mar 03 14:43:28 kubemaster kubelet[7693]: E0303 14:43:28.106868    7693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://192.168.56.11:6443/api/v1/nodes\": dial tcp 192.168.56.11:6443: connect: connection refused" node="kubemaster"
......
Mar 03 14:46:00 kubemaster kubelet[7693]: I0303 14:46:00.233424    7693 kubelet_node_status.go:73] "Attempting to register node" node="kubemaster"
Mar 03 14:46:00 kubemaster kubelet[7693]: E0303 14:46:00.235233    7693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://192.168.56.11:6443/api/v1/nodes\": dial tcp 192.168.56.11:6443: connect: connection refused" node="kubemaster"
Mar 03 14:46:05 kubemaster kubelet[7693]: E0303 14:46:05.041703    7693 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.56.11:6443/api/v1/namespaces/default/events\": dial tcp 192.168.56.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kubemaster.17b94847f18f9c03  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Na>
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.069454    7693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.56.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubemaster?timeout=10s\": dial tcp 192.168.56.11:6443: connect: connection refused" interval="7s"
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.086474    7693 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://192.168.56.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:46:07 kubemaster kubelet[7693]: I0303 14:46:07.237805    7693 kubelet_node_status.go:73] "Attempting to register node" node="kubemaster"
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.238812    7693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://192.168.56.11:6443/api/v1/nodes\": dial tcp 192.168.56.11:6443: connect: connection refused" node="kubemaster"
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.549045    7693 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubemaster\" not found"
Mar 03 14:46:07 kubemaster kubelet[7693]: W0303 14:46:07.579311    7693 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.56.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.579744    7693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.56.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.56.11:6443: connect: connection refused
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.631630    7693 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head>
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.631741    7693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"ht>
Mar 03 14:46:07 kubemaster kubelet[7693]: E0303 14:46:07.631773    7693 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"h>

Two things I notice here:

  1. Is there any reason why you are using an alternative image registry in kubeadm init? Is this because you are in China and do not have access to the standard repos? None of our staff are in China, and as such we cannot advise on what the best setup is there.
  2. If your laptop has only 2 cores, it may be underpowered for this lab. VirtualBox requests 6 virtual cores (2 per VM), and it will allocate them, but with only 2 physical cores the virtual cores will be heavily throttled and run very slowly. This could be the cause of the timeout error.

If the API server has not started, then the kubelet will show exactly the errors you have provided.
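
One way to verify that is to ask the container runtime directly (a sketch; crictl needs to be pointed at the containerd socket if /etc/crictl.yaml is not set up):

# List all containers, including exited ones, and look for kube-apiserver
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube-apiserver
# If a container ID appears, check why it exited (replace <container-id>)
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs <container-id>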

Thank you for your help!

  1. The reason for using an external registry is that I am in China and need a mirror to pull the k8s images

  2. If there are no issues with the steps above, then this may indeed be the reason. I will test on another laptop later

I tried a new laptop, but it still does not work.

OS: Windows 10
CPU: Intel i5-10400
MEM: 16 GB

And I set NUM_WORKER_NODE to 0:

➜  virtualbox cat Vagrantfile
# -*- mode: ruby -*-
# vi:set ft=ruby sw=2 ts=2 sts=2:

# Define the number of master and worker nodes
# If this number is changed, remember to update setup-hosts.sh script with the new hosts IP details in /etc/hosts of each VM.
NUM_WORKER_NODE = 0

The kubelet shows the same errors, and kubeadm init still times out.

We will be merging an update to this repo within the next week. It is well tested.
However, we cannot vouch for the suitability of images not downloaded from the standard repos.