*Query related to drain of nodes*

Mudit:
Query related to drain of nodes:
In the last question of the Practice Test - OS Upgrade, although we were only asked to run “k cordon node01”, I purposely ran “k drain node01”.
The output of the drain command showed the hr-pod being evicted successfully. But when I checked, an hr-pod was still running on node01, even though node01 had been drained successfully and marked unschedulable.
Apart from this, there was no taint on controlplane. So why did the hr-pod not move to controlplane instead of staying on node01?
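For reference, a rough comparison of the two commands as I understand the standard kubectl behaviour (same node name as in the lab):
# Marks the node unschedulable only; pods already running on it keep running:
k cordon node01
# Marks the node unschedulable AND evicts its pods; DaemonSet-managed pods are skipped via the flag:
k drain node01 --ignore-daemonsets
# Re-enables scheduling on the node afterwards:
k uncordon node01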

Ravi Shanker:
Was it part of a ReplicaSet? Did it not ask you to use the --force option?
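A quick way to check is the pod's ownerReferences (pod name taken from the drain output below; the jsonpath prints the kind of the owning controller, if any):
k get pod hr-app-76d475c57d-4pnq8 -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
# Prints "ReplicaSet" for a Deployment/ReplicaSet-managed pod; a bare pod has no owner,
# and kubectl drain refuses to delete bare pods unless --force is given.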

Mudit:
Hi @Ravi Shankar, this hr-app pod is part of a ReplicaSet. Here is the output.
The drain command executed successfully:
root@controlplane:~# k drain node01 --ignore-daemonsets
node/node01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-hlxns, kube-system/kube-proxy-v4t89
evicting pod default/hr-app-76d475c57d-4pnq8
pod/hr-app-76d475c57d-4pnq8 evicted
node/node01 evicted
root@controlplane:~#

But the hr-app-76d475c57d-nw7xp pod is still on node01:
root@controlplane:~# k get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
blue-746c87566d-7s6xq     1/1     Running   0          11m   10.244.0.4   controlplane   <none>           <none>
blue-746c87566d-rsfh5     1/1     Running   0          11m   10.244.0.5   controlplane   <none>           <none>
blue-746c87566d-wf7zf     1/1     Running   0          11m   10.244.0.6   controlplane   <none>           <none>
hr-app-76d475c57d-nw7xp   1/1     Running   0          51s   10.244.1.7   node01         <none>           <none>
root@controlplane:~#

root@controlplane:~# k get nodes
NAME           STATUS                     ROLES                  AGE   VERSION
controlplane   Ready                      control-plane,master   35m   v1.20.0
node01         Ready,SchedulingDisabled   <none>                 33m   v1.20.0
root@controlplane:~#
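A few checks that might show why the scheduler put the new replica back on node01 rather than on controlplane (pod and node names are taken from the output above; this is only a diagnostic sketch):
k get node controlplane -o jsonpath='{.spec.taints}{"\n"}'
# Empty output would confirm there is no taint keeping pods off controlplane.
k get pod hr-app-76d475c57d-nw7xp -o yaml | grep -A3 -E 'nodeSelector|affinity|tolerations'
# A nodeSelector/affinity pinning the pod to node01, or a toleration for the
# node.kubernetes.io/unschedulable taint added by cordon, would let it stay there.
k describe pod hr-app-76d475c57d-nw7xp
# The Events section at the end shows how (or whether) the scheduler placed the pod.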