Upgrade of control plane node: what about the pods?

After upgrading the control plane node, I still see the pods running on node01. Is this expected behavior? Am I correct?
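
For context, this is the usual kubeadm control plane upgrade flow I am asking about (a rough sketch only: the node name "controlplane" and the target version are placeholders, not the exact values from the lab):

```
# Evict workload pods from the node being upgraded (DaemonSet pods stay put)
kubectl drain controlplane --ignore-daemonsets

# On the control plane node, after installing the newer kubeadm package:
kubeadm upgrade plan
kubeadm upgrade apply v1.31.0      # target version is a placeholder

# Upgrade the kubelet/kubectl packages, then restart the kubelet
systemctl daemon-reload
systemctl restart kubelet

# Make the node schedulable again once the upgrade is done
kubectl uncordon controlplane
```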

controlplane ~ ➜ kubectl describe node node01
Name: node01
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node01
kubernetes.io/os=linux
Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"ee:72:b2:7f:a8:8e"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.28.114.12
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 19 Oct 2024 20:22:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: node01
AcquireTime: <unset>
RenewTime: Sat, 19 Oct 2024 22:08:04 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sat, 19 Oct 2024 20:22:36 +0000 Sat, 19 Oct 2024 20:22:36 +0000 FlannelIsUp Flannel is running on this node
MemoryPressure False Sat, 19 Oct 2024 22:06:11 +0000 Sat, 19 Oct 2024 20:22:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 19 Oct 2024 22:06:11 +0000 Sat, 19 Oct 2024 20:22:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 19 Oct 2024 22:06:11 +0000 Sat, 19 Oct 2024 20:22:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 19 Oct 2024 22:06:11 +0000 Sat, 19 Oct 2024 20:22:34 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.28.114.12
Hostname: node01
Capacity:
cpu: 36
ephemeral-storage: 1016057248Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 214587048Ki
pods: 110
Allocatable:
cpu: 36
ephemeral-storage: 936398358207
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 214484648Ki
pods: 110
System Info:
Machine ID: 4640faee703a4be3a2aecf05da0e8976
System UUID: 1c118f50-8f55-130d-4ecb-44ecf3647ea2
Boot ID: a6eff6dd-909b-4d0b-8e6b-12444b134df1
Kernel Version: 5.4.0-1106-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.26
Kubelet Version: v1.30.0
Kube-Proxy Version: v1.30.0
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default blue-fffb6db8d-227sn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m
default blue-fffb6db8d-4jnbx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m
default blue-fffb6db8d-4nqbk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m
default blue-fffb6db8d-tknm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31m
default blue-fffb6db8d-wllfh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m
kube-flannel kube-flannel-ds-tkb7t 100m (0%) 0 (0%) 50Mi (0%) 0 (0%) 105m
kube-system coredns-6f6b679f8f-rqlj2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2m30s
kube-system coredns-6f6b679f8f-v76s5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2m30s
kube-system kube-proxy-r9s7x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m27s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 300m (0%) 0 (0%)
memory 190Mi (0%) 340Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Normal Starting 2m22s kube-proxy
Normal RegisteredNode 3m9s node-controller Node node01 event: Registered Node node01 in Controller
Normal RegisteredNode 2m30s node-controller Node node01 event: Registered Node node01 in Controller
Normal RegisteredNode 94s node-controller Node node01 event: Registered Node node01 in Controller

There are 9 pods scheduled on node01. Some of them are core Kubernetes pods that I thought should be running on the control plane. Am I correct here?
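
For what it's worth, a quicker way than reading the describe output to see which node each pod landed on would be something like this (node name taken from the lab above):

```
# List every pod together with the node it is scheduled on
kubectl get pods -A -o wide

# Or restrict the listing to pods running on node01
kubectl get pods -A -o wide --field-selector spec.nodeName=node01
```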

No, not quite: what happened here is indeed expected, and it is how Deployment-managed pods behave when you drain a node. Draining evicts the pods; because they are owned by a Deployment (via its ReplicaSet), replacement pods are created and scheduled onto other nodes that are not cordoned. Core components like CoreDNS are ordinary Deployments, so they are not required to run on the control plane node. That's how things are supposed to work.
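
If you want to verify this yourself, here is a rough sketch (resource names are taken from the describe output above, assuming the standard kubeadm plus flannel setup shown there):

```
# CoreDNS is managed by a Deployment, so evicted replicas are recreated
# on whichever schedulable nodes remain (node01 in this case)
kubectl -n kube-system get deployment coredns

# kube-proxy and flannel are DaemonSets: "kubectl drain --ignore-daemonsets"
# skips them, so one copy stays on every node, including the drained one
kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-flannel get daemonset kube-flannel-ds

# Watch the replacement pods get scheduled while a node is being drained
kubectl get pods -A -o wide --watch
```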