
Alex Tsmokalyuk:
Guys, I drained the worker node but only removed the taint from the controlplane afterwards, and now the pod is Pending:

Alex Tsmokalyuk:
Warning FailedScheduling 12m default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

Alex Tsmokalyuk:
how to reschedule it?

Trung Tran:
What do you get if you type kubectl get nodes?

Trung Tran:
And can you describe the master node to confirm the taint was removed?
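Something like this should show it (assuming the control-plane node is actually named controlplane):

kubectl describe node controlplane | grep -i taint
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'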

Alex Tsmokalyuk:
yes, I confirm it was removed

mjv:
did you have any pods (not part of apiGroup apps, i.e. not from *Sets) on the worker node?

mjv:
if you had a regular pod (created with k run NAME --image=IMAGE), it was evicted (deleted) from the cluster
on the other hand, pods from *Sets (apiGroup apps) will sit in Pending state until you remove the taint on the controlplane node, and then they will be scheduled to the controlplane node
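roughly, the commands involved look like this (controlplane/node01 are just placeholder node names here):

kubectl drain node01 --ignore-daemonsets --force                          # --force is what deletes the bare pods
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane-   # trailing '-' removes the taint
kubectl uncordon node01                                                   # clears node.kubernetes.io/unschedulable when you want the worker back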

Alex Tsmokalyuk:
ah, the lab has terminated

Alex Tsmokalyuk:
@mjv so they are lost forever if the worker node was drained before removing the taint from the CP?

Alex Tsmokalyuk:
those created by k run --image

kubosub:
only pods managed by deployments/replicasets will be recreated on another node when the node is drained. stand-alone pods will be deleted
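you can watch that happen with something like this (output names are just illustrative):

kubectl get rs               # the ReplicaSet keeps DESIRED=1, so it creates a replacement pod with a new name
kubectl get pods -o wide     # the replacement lands on whichever node is schedulable; the bare pod is simply gone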

Alex Tsmokalyuk:
thank you!

Alex Tsmokalyuk:
need to check if the pod's description shows how it was created: manually or by a deploy

mjv:
you can check metadata.ownerReferences on the pod to see who its parent is (if there is one)
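for example (POD is a placeholder for the actual pod name):

kubectl get pod POD -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'
# prints ReplicaSet for a deployment-managed pod; prints nothing for a bare pod created with k run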

Alex Tsmokalyuk:
k get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
gold-nginx   1/1     1            1           66s

Alex Tsmokalyuk:
controlplane ~ ➜ k get pods
NAME                          READY   STATUS    RESTARTS   AGE
gold-nginx-7cf65dbf6d-xchqw   1/1     Running   0          98s

controlplane ~ ➜ k delete pod gold-nginx-7cf65dbf6d-xchqw --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "gold-nginx-7cf65dbf6d-xchqw" force deleted

controlplane ~ ➜ k get pods
NAME                          READY   STATUS    RESTARTS   AGE
gold-nginx-7cf65dbf6d-xdrp6   1/1     Running   0          6s

Alex Tsmokalyuk:
it's the deployment one, but it hangs and can't reschedule itself

Alex Tsmokalyuk:
found it: looks like I forgot to restart kubelet on controlplane after updating it
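for anyone else hitting this, the fix on the controlplane was roughly (assuming a kubeadm/systemd setup):

sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes        # node goes back to Ready and the Pending pod gets scheduled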