First cluster following tutorial but having issues

Hi there,

I’m very new to Kubernetes and have done the "Kubernetes for the Absolute Beginners - Hands-on Tutorial" course. After following the tutorial and setting up my first cluster, the control plane is just constantly restarting, and I’m trying to work out the best way to troubleshoot it.

Is there a recommended order for checking the logs here? I’ve looked at the logs of some of the pods and nothing stands out to me so far, so I wondered if there is a good process to follow when troubleshooting something like this.
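
For reference, this is roughly the order I’ve been poking at so far; is that a sensible sequence? (Just a sketch, assuming a stock kubeadm install with containerd, and crictl configured to talk to it; the pod name is taken from the list below.)

# 1. Is the kubelet itself healthy?
sudo systemctl status kubelet
sudo journalctl -u kubelet --since "10 min ago"

# 2. What containers does the runtime itself see?
sudo crictl ps -a

# 3. Logs of an individual control-plane pod, while the API server answers
kubectl -n kube-system logs kube-scheduler-centos-k8-master1 --previous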

Currently the situation is:
NAMESPACE     NAME                                         READY   STATUS             RESTARTS          AGE
kube-system   coredns-674b8bbfcf-jhjm7                     0/1     Pending            0                 17h
kube-system   coredns-674b8bbfcf-m5tjm                     0/1     Pending            0                 17h
kube-system   etcd-centos-k8-master1                       1/1     Running            218 (9m8s ago)    17h
kube-system   kube-apiserver-centos-k8-master1             0/1     Running            213 (5m13s ago)   17h
kube-system   kube-controller-manager-centos-k8-master1    0/1     Running            208 (5m ago)      17h
kube-system   kube-proxy-8n7vw                             0/1     CrashLoopBackOff   187 (93s ago)     17h
kube-system   kube-scheduler-centos-k8-master1             0/1     CrashLoopBackOff   210 (3m9s ago)    17h

If there’s a specific course or lab that covers this, a recommendation would be great.

I’m also a little perplexed as to why following the hands-on lab gives me these issues when it seemed to work fine for the instructor, and I’m doing the same things. The only difference is that I’m on CentOS, using the equivalent commands for that OS.

Current setup:

  • The node I’ve tried to set up for the cluster is a CentOS VM running in Proxmox; it has full, working network access to the internet and can communicate with other nodes fine
  • Spec is 4 GB RAM, 2 CPUs, and 32 GB of disk space

If you need me to paste any config, let me know, and thank you!

Hi @zimboguy

Have you installed a CNI plugin? Also, which steps did you follow to set up your Kubernetes cluster? It would be helpful if you could share a screenshot or paste the logs here inside a code block.
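
For example, a quick way to check whether any CNI configuration is present on the node (assuming the standard CNI config directory):

ls /etc/cni/net.d/
kubectl get pods -A | grep -Ei 'flannel|calico|weave|cilium'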

Thanks for replying, Raymond. I haven’t installed a CNI plugin yet, as I thought all of this had to be working nicely first before I could install it?

Most of the time I can’t even see the pods on this node, because I keep getting:

The connection to the server 192.168.1.138:6443 was refused - did you specify the right host or port?
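
When that happens, the only check I’ve found is whether anything is listening on the API server port at all (a rough sketch; 192.168.1.138 is my node’s IP, adjust as needed):

sudo ss -tlnp | grep 6443
curl -k https://192.168.1.138:6443/healthz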

Manifest files:

cat etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.1.138:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.1.138:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --initial-advertise-peer-urls=https://192.168.1.138:2380
    - --initial-cluster=centos-k8-master1=https://192.168.1.138:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.1.138:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.1.138:2380
    - --name=centos-k8-master1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.k8s.io/etcd:3.5.21-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /livez
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 127.0.0.1
        path: /readyz
        port: 2381
        scheme: HTTP
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /readyz
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

cat kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.138:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.138
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.k8s.io/kube-apiserver:v1.33.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.138
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.1.138
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.1.138
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki/ca-trust
      name: etc-pki-ca-trust
      readOnly: true
    - mountPath: /etc/pki/tls/certs
      name: etc-pki-tls-certs
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki/ca-trust
      type: DirectoryOrCreate
    name: etc-pki-ca-trust
  - hostPath:
      path: /etc/pki/tls/certs
      type: DirectoryOrCreate
    name: etc-pki-tls-certs
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}

cat kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: registry.k8s.io/kube-scheduler:v1.33.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /livez
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 127.0.0.1
        path: /readyz
        port: 10259
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /livez
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

cat kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.33.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki/ca-trust
      name: etc-pki-ca-trust
      readOnly: true
    - mountPath: /etc/pki/tls/certs
      name: etc-pki-tls-certs
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki/ca-trust
      type: DirectoryOrCreate
    name: etc-pki-ca-trust
  - hostPath:
      path: /etc/pki/tls/certs
      type: DirectoryOrCreate
    name: etc-pki-tls-certs
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

Logs:
kube-system_etcd logs:

2025-07-02T10:19:59.564659042+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:19:59.564565Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
2025-07-02T10:19:59.564755534+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.564693Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.1.138:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.1.138:2380","--initial-cluster=centos-k8-master1=https://192.168.1.138:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.1.138:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.1.138:2380","--name=centos-k8-master1","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
2025-07-02T10:19:59.564881287+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.564819Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
2025-07-02T10:19:59.564886608+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:19:59.564852Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
2025-07-02T10:19:59.564919265+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.564873Z","caller":"embed/etcd.go:140","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.1.138:2380"]}
2025-07-02T10:19:59.564975657+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.564944Z","caller":"embed/etcd.go:528","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
2025-07-02T10:19:59.565383447+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.565344Z","caller":"embed/etcd.go:148","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.138:2379"]}
2025-07-02T10:19:59.56556012+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.565497Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.5.21","git-sha":"a17edfd","go-version":"go1.23.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"centos-k8-master1","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.1.138:2380"],"listen-peer-urls":["https://192.168.1.138:2380"],"advertise-client-urls":["https://192.168.1.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
2025-07-02T10:19:59.568973045+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.568930Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"3.196686ms"}
2025-07-02T10:19:59.585320167+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.585161Z","caller":"etcdserver/server.go:534","msg":"No snapshot found. Recovering WAL from scratch!"}
2025-07-02T10:19:59.605970918+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.605718Z","caller":"etcdserver/raft.go:541","msg":"restarting local member","cluster-id":"3361fc64a6a48eaf","local-member-id":"ecc881f1ce49cbf8","commit-index":8110}
2025-07-02T10:19:59.607842705+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.607520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 switched to configuration voters=()"}
2025-07-02T10:19:59.60785276+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.607550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 became follower at term 200"}
2025-07-02T10:19:59.607856592+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.607560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ecc881f1ce49cbf8 [peers: [], term: 200, commit: 8110, applied: 0, lastindex: 8110, lastterm: 200]"}
2025-07-02T10:19:59.610154248+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:19:59.610059Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
2025-07-02T10:19:59.617792807+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.617638Z","caller":"mvcc/kvstore.go:425","msg":"kvstore restored","current-rev":6498}
2025-07-02T10:19:59.617813133+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.617694Z","caller":"etcdserver/server.go:628","msg":"restore consistentIndex","index":8110}
2025-07-02T10:19:59.62029324+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.620188Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
2025-07-02T10:19:59.623158498+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.623021Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ecc881f1ce49cbf8","timeout":"7s"}
2025-07-02T10:19:59.624793281+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.624655Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ecc881f1ce49cbf8"}
2025-07-02T10:19:59.624804188+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.624683Z","caller":"etcdserver/server.go:875","msg":"starting etcd server","local-member-id":"ecc881f1ce49cbf8","local-server-version":"3.5.21","cluster-version":"to_be_decided"}
2025-07-02T10:19:59.624914753+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.624854Z","caller":"etcdserver/server.go:775","msg":"starting initial election tick advance","election-ticks":10}
2025-07-02T10:19:59.624977839+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.624920Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
2025-07-02T10:19:59.624982036+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.624944Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
2025-07-02T10:19:59.624984678+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.624951Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
2025-07-02T10:19:59.625129898+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.625069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 switched to configuration voters=(17062030063841168376)"}
2025-07-02T10:19:59.62513863+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.625106Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3361fc64a6a48eaf","local-member-id":"ecc881f1ce49cbf8","added-peer-id":"ecc881f1ce49cbf8","added-peer-peer-urls":["https://192.168.1.138:2380"],"added-peer-is-learner":false}
2025-07-02T10:19:59.625200544+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.625152Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"3361fc64a6a48eaf","local-member-id":"ecc881f1ce49cbf8","cluster-version":"3.5"}
2025-07-02T10:19:59.625204926+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.625171Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
2025-07-02T10:19:59.625645622+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.625576Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
2025-07-02T10:19:59.626774735+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.626718Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
2025-07-02T10:19:59.627060421+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.626998Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ecc881f1ce49cbf8","initial-advertise-peer-urls":["https://192.168.1.138:2380"],"listen-peer-urls":["https://192.168.1.138:2380"],"advertise-client-urls":["https://192.168.1.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
2025-07-02T10:19:59.627144921+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.627088Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
2025-07-02T10:19:59.627344952+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.627274Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.1.138:2380"}
2025-07-02T10:19:59.627392122+01:00 stderr F {"level":"info","ts":"2025-07-02T09:19:59.627358Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.1.138:2380"}
2025-07-02T10:20:00.908560639+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 is starting a new election at term 200"}
2025-07-02T10:20:00.908582496+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 became pre-candidate at term 200"}
2025-07-02T10:20:00.908585821+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 received MsgPreVoteResp from ecc881f1ce49cbf8 at term 200"}
2025-07-02T10:20:00.908690772+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 became candidate at term 201"}
2025-07-02T10:20:00.908700422+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 received MsgVoteResp from ecc881f1ce49cbf8 at term 201"}
2025-07-02T10:20:00.908703738+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ecc881f1ce49cbf8 became leader at term 201"}
2025-07-02T10:20:00.908707388+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.908552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ecc881f1ce49cbf8 elected leader ecc881f1ce49cbf8 at term 201"}
2025-07-02T10:20:00.914597694+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.914435Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"ecc881f1ce49cbf8","local-member-attributes":"{Name:centos-k8-master1 ClientURLs:[https://192.168.1.138:2379]}","request-path":"/0/members/ecc881f1ce49cbf8/attributes","cluster-id":"3361fc64a6a48eaf","publish-timeout":"7s"}
2025-07-02T10:20:00.914619511+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.914489Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
2025-07-02T10:20:00.914685984+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.914567Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
2025-07-02T10:20:00.914803062+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.914714Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
2025-07-02T10:20:00.914807837+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.914728Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
2025-07-02T10:20:00.915062984+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.915004Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
2025-07-02T10:20:00.915306897+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.915250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
2025-07-02T10:20:00.915488809+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.915418Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
2025-07-02T10:20:00.915982921+01:00 stderr F {"level":"info","ts":"2025-07-02T09:20:00.915931Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.1.138:2379"}
2025-07-02T10:21:25.480512498+01:00 stderr F {"level":"info","ts":"2025-07-02T09:21:25.480403Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
2025-07-02T10:21:25.480719081+01:00 stderr F {"level":"info","ts":"2025-07-02T09:21:25.480579Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"centos-k8-master1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.138:2380"],"advertise-client-urls":["https://192.168.1.138:2379"]}
2025-07-02T10:21:25.484002285+01:00 stderr F {"level":"info","ts":"2025-07-02T09:21:25.483580Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ecc881f1ce49cbf8","current-leader-member-id":"ecc881f1ce49cbf8"}
2025-07-02T10:21:25.48401415+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:21:25.483646Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.1.138:2379: use of closed network connection"}
2025-07-02T10:21:25.484017169+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:21:25.483658Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.1.138:2379: use of closed network connection"}
2025-07-02T10:21:25.48401956+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:21:25.483681Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
2025-07-02T10:21:25.48402202+01:00 stderr F {"level":"warn","ts":"2025-07-02T09:21:25.483688Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
2025-07-02T10:21:25.486985137+01:00 stderr F {"level":"info","ts":"2025-07-02T09:21:25.486901Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.1.138:2380"}
2025-07-02T10:21:25.487077859+01:00 stderr F {"level":"info","ts":"2025-07-02T09:21:25.487008Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.1.138:2380"}
2025-07-02T10:21:25.487083241+01:00 stderr F {"level":"info","ts":"2025-07-02T09:21:25.487021Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"centos-k8-master1","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.138:2380"],"advertise-client-urls":["https://192.168.1.138:2379"]}

kube-apiserver logs:

2025-07-02T10:21:43.5966377+01:00 stderr F I0702 09:21:43.596516       1 options.go:249] external host was not specified, using 192.168.1.138
2025-07-02T10:21:43.598097835+01:00 stderr F I0702 09:21:43.598044       1 server.go:147] Version: v1.33.2
2025-07-02T10:21:43.598133468+01:00 stderr F I0702 09:21:43.598088       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-07-02T10:21:43.726710027+01:00 stderr F W0702 09:21:43.726612       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:43.727235097+01:00 stderr F W0702 09:21:43.727058       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:43.727293034+01:00 stderr F I0702 09:21:43.727113       1 shared_informer.go:350] "Waiting for caches to sync" controller="node_authorizer"
2025-07-02T10:21:43.733234897+01:00 stderr F I0702 09:21:43.733160       1 shared_informer.go:350] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
2025-07-02T10:21:43.737396848+01:00 stderr F I0702 09:21:43.737348       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
2025-07-02T10:21:43.737450604+01:00 stderr F I0702 09:21:43.737421       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
2025-07-02T10:21:43.737637284+01:00 stderr F I0702 09:21:43.737601       1 instance.go:233] Using reconciler: lease
2025-07-02T10:21:43.738371333+01:00 stderr F W0702 09:21:43.738271       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:44.727424472+01:00 stderr F W0702 09:21:44.727300       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:44.727444954+01:00 stderr F W0702 09:21:44.727300       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:44.739457774+01:00 stderr F W0702 09:21:44.739309       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:46.240587093+01:00 stderr F W0702 09:21:46.240447       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:46.240635563+01:00 stderr F W0702 09:21:46.240447       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:46.599586037+01:00 stderr F W0702 09:21:46.599466       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:48.341824605+01:00 stderr F W0702 09:21:48.341695       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:48.364480247+01:00 stderr F W0702 09:21:48.364361       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:49.29472785+01:00 stderr F W0702 09:21:49.294618       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:52.059516682+01:00 stderr F W0702 09:21:52.059426       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:52.886169037+01:00 stderr F W0702 09:21:52.886032       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:53.849923404+01:00 stderr F W0702 09:21:53.849803       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:57.425092474+01:00 stderr F W0702 09:21:57.424930       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:21:58.183029451+01:00 stderr F W0702 09:21:58.182911       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:22:01.375268434+01:00 stderr F W0702 09:22:01.375155       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
2025-07-02T10:22:03.738338472+01:00 stderr F F0702 09:22:03.738176       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded

My guess is that this is a problem with the containerd CRI’s config.toml file, caused by a wrong cgroup setting. Your etcd log supports this: the server comes up cleanly and is then terminated by a SIGTERM a couple of minutes later, which is a classic symptom of a kubelet/containerd cgroup-driver mismatch. Please take a look at our kubeadm tutorial’s “node setup” page, item #5, which shows a technique for getting around this.
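
To confirm the setting and apply it without a full reboot, something along these lines should work (assuming the default config path):

sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
sudo systemctl restart containerd kubelet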

Thanks Rob,

I did have this set already, but the lab states that all you need is JUST:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Ref: Container Runtimes | Kubernetes

The instructor says anything else might cause problems, but the page you referred me to generates many more options in the config file via the command:

containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml

So there’s some conflicting info here, but I don’t yet know whether this is the cause, since I already had the “SystemdCgroup = true” part set correctly.
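
For anyone checking the same thing: as far as I can tell, containerd can print the merged configuration it actually computes (the file on disk plus built-in defaults), which should show whether the setting is really being picked up:

sudo containerd config dump | grep -n 'SystemdCgroup'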

I might just destroy this VM and start the setup from scratch again, although I have tried that a few times and the problem happens every time. I’m not sure if I’m missing something for CentOS here :thinking:
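
Alternatively, I suppose I could tear down just the cluster state rather than the whole VM; as I understand it, kubeadm reset wipes /etc/kubernetes and the local etcd data:

sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

(the pod CIDR matching the --cluster-cidr in my controller-manager manifest)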

PS I have now done:

containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml

and rebooted, but I’m still getting the error:
“The connection to the server 192.168.1.138:6443 was refused - did you specify the right host or port?”

OK, I have success! The only thing I did differently was that command you referenced, Rob:

containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml

And now it’s stable and all pods are running!

NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-674b8bbfcf-j8fmb                    0/1     Pending   0          5m55s
kube-system   coredns-674b8bbfcf-jbh2w                    0/1     Pending   0          5m55s
kube-system   etcd-centos-k8-master1                      1/1     Running   0          6m1s
kube-system   kube-apiserver-centos-k8-master1            1/1     Running   0          6m1s
kube-system   kube-controller-manager-centos-k8-master1   1/1     Running   0          6m1s
kube-system   kube-proxy-cbbdj                            1/1     Running   0          5m55s
kube-system   kube-scheduler-centos-k8-master1            1/1     Running   0          6m1s

So that really was the only thing that was different. Thanks for your help, Rob and Raymond! Glad I got to the bottom of this and can continue learning Kubernetes now!
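
Next step is the CNI plugin Raymond mentioned, so the coredns pods come out of Pending. Since the cluster uses the 10.244.0.0/16 pod CIDR, I’m planning to try flannel’s published manifest (command as per the flannel README, if I have it right):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml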
