Mayur Shivram Kadam:
@Mumshad Mannambeth @Vijin Palazhi @Tej_Singh_Rana @Rahul Soni Team
I’m facing an issue in the practice labs while upgrading the K8s worker node. After upgrading and uncordoning the master node, I try to drain the worker node, but the pods are not evicted from it even when I use the --ignore-daemonsets option. There are no taints on the master either. What could be the issue? PFA output from the lab below.
root@controlplane:~# kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
error: unable to drain node "node01", aborting command...
There are pending nodes to be drained:
node01
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/simple-webapp-1
root@controlplane:~#
root@controlplane:~# k get nodes
NAME           STATUS                     ROLES                  AGE   VERSION
controlplane   Ready                      control-plane,master   78m   v1.20.0
node01         Ready,SchedulingDisabled   <none>                 77m   v1.19.0
root@controlplane:~# k get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-62vzd   1/1     Running   0          52m   10.244.1.5    node01   <none>           <none>
blue-746c87566d-htqsp   1/1     Running   0          36m   10.244.1.10   node01   <none>           <none>
blue-746c87566d-plz64   1/1     Running   0          36m   10.244.1.12   node01   <none>           <none>
blue-746c87566d-tgm8j   1/1     Running   0          52m   10.244.1.3    node01   <none>           <none>
blue-746c87566d-vbv7m   1/1     Running   0          52m   10.244.1.4    node01   <none>           <none>
simple-webapp-1         1/1     Running   0          52m   10.244.1.2    node01   <none>           <none>
root@controlplane:~#
root@controlplane:~# k describe nodes controlplane | grep -i taints
Taints: <none>
root@controlplane:~#
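From the error message above, my guess is that the drain is blocked by the standalone pod default/simple-webapp-1 (it is not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet), not by any taints. Would forcing the eviction be the right approach here? Something like the below is what I'm considering, based only on the hint in the error output (note that --force permanently deletes an unmanaged pod, since nothing will recreate it):

# forces eviction of pods not managed by a controller, e.g. default/simple-webapp-1
kubectl drain node01 --ignore-daemonsets --force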