Mayur Shivram Kadam:
@Mumshad Mannambeth @Vijin Palazhi @Tej_Singh_Rana @Rahul Soni Team
I’m facing an issue in the practice labs while upgrading the K8s worker node. After upgrading and uncordoning the master node, I try to drain the worker node, but the pods are not evicted from it even when I use the --ignore-daemonsets option. There are no taints on the master either. What could be the issue? PFA output from the labs.

root@controlplane:~# kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
error: unable to drain node "node01", aborting command...

There are pending nodes to be drained:
 node01
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/simple-webapp-1
root@controlplane:~# 
root@controlplane:~# 
root@controlplane:~# k get nodes
NAME           STATUS                     ROLES                  AGE   VERSION
controlplane   Ready                      control-plane,master   78m   v1.20.0
node01         Ready,SchedulingDisabled   <none>                 77m   v1.19.0

root@controlplane:~# k get po -o wide                  
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-62vzd   1/1     Running   0          52m   10.244.1.5    node01   <none>           <none>
blue-746c87566d-htqsp   1/1     Running   0          36m   10.244.1.10   node01   <none>           <none>
blue-746c87566d-plz64   1/1     Running   0          36m   10.244.1.12   node01   <none>           <none>
blue-746c87566d-tgm8j   1/1     Running   0          52m   10.244.1.3    node01   <none>           <none>
blue-746c87566d-vbv7m   1/1     Running   0          52m   10.244.1.4    node01   <none>           <none>
simple-webapp-1         1/1     Running   0          52m   10.244.1.2    node01   <none>           <none>
root@controlplane:~# 

root@controlplane:~# k describe nodes controlplane | grep -i taints
Taints:             <none>
root@controlplane:~# 
root@controlplane:~#

Tej_Singh_Rana:
Add the --force flag as well. I can see there is a pod that is not managed by an RC, RS, DS or StatefulSet.

Basavraj Nilkanthe:
@Mayur Shivram Kadam if you look at the error message closely, I think you can get an idea of what to do in such a case.

error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/simple-webapp-1

As Tej mentioned, you need to add the --force flag to delete standalone pods (those not managed by a Deployment, RS, DS, or STS).
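For reference, the full drain command would look like the sketch below. Note that --force permanently deletes the bare pod default/simple-webapp-1: since no controller owns it, nothing will recreate it after eviction.

```shell
# Drain node01: evict all workload pods, skip DaemonSet-managed pods,
# and force-delete bare pods that no controller would recreate.
kubectl drain node01 --ignore-daemonsets --force
```

The Deployment-managed 'blue' pods will be rescheduled elsewhere by their ReplicaSet, but simple-webapp-1 is gone for good, which is exactly why kubectl refuses to delete it without --force.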

Mayur Shivram Kadam:
@Tej_Singh_Rana Thanks for the pointer. I got confused because the previous question said no users should be affected, meaning no pods should be killed, but it appears only the pods of the ‘blue’ deployment are the main concern here.