Kubernetes questions on cordon and uncordon

1) kubectl drain node01 --ignore-daemonsets --> what are these daemonsets?
2) Let's suppose we have a node with a pod that was not created as part of a replica-set. That means if we forcefully cordon/drain that node we are going to lose that pod and its container. Am I correct here?
3) Let's suppose we have a node with a pod that was created as part of a replica-set. That means if we forcefully cordon/drain that node, the pod/container will be recreated on any other available node, as it is part of the replica-set. Am I correct here?

  1. Frequently, worker nodes as well as control-plane nodes have pods owned by daemonsets running on them, to accomplish a variety of tasks. The --ignore-daemonsets flag allows these pods to continue running during the drain procedure.
  2. Yes, a standalone pod will be taken down and lost by drain (in fact drain refuses to evict such a pod unless you also pass --force). It assumes that if you are running pods without a deployment or similar object to own them, you know what you're doing and have backed up the pod spec yourself. WELL? DID YOU?
  3. Should work; the ReplicaSet will recreate the pod on a different node in this case, I'm reasonably sure. See the owner-reference check after this list for how to tell which of these two cases a given pod falls into.
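
One quick way to check which case a given pod falls into is to look at its owner references (my-pod is just a placeholder name):

    # Prints the kind of controller (if any) that owns the pod, e.g. ReplicaSet or DaemonSet.
    # An empty result means it is a standalone pod and will not be recreated after a drain.
    kubectl get pod my-pod -o jsonpath='{.metadata.ownerReferences[*].kind}'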

A daemonset (DS) is a special kind of workload, similar to a deployment, that guarantees to run one pod per node; therefore the number of pods a daemonset runs equals the number of (matching) nodes in the cluster.

This type of workload is used where the pods have to do something node-specific. kube-proxy is a daemonset since its job is to set up iptables rules on each node so that cluster networking is possible. Log collectors, e.g. feeding Elasticsearch, are another example.
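
You can see the one-pod-per-node behaviour in any cluster by comparing daemonset pod counts with the node count (no special setup assumed):

    # DESIRED/CURRENT for each daemonset should match the number of nodes it targets
    kubectl get daemonsets --all-namespaces
    kubectl get nodes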

The reason we pass --ignore-daemonsets is that drain cannot usefully evict a daemonset pod: the daemonset controller would immediately launch another pod in its place on the same node (DS pods tolerate the unschedulable taint that cordon applies), so without the flag drain refuses to proceed at all. With the flag, drain simply leaves the DS pods running, which is fine because they are not the workloads we are trying to move off the node.
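
For example (node01 is just an illustrative node name):

    # Without the flag, drain stops with an error as soon as it finds
    # daemonset-managed pods on the node:
    kubectl drain node01
    # With the flag, those pods are left in place and everything else is evicted:
    kubectl drain node01 --ignore-daemonsets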

Note the difference between cordon and drain

  • Cordon marks the node unschedulable, so the scheduler will not place any new pods on it. Pods that are already there continue to run.
  • Drain first cordons the node, then evicts all pods (except those you have ignored), effectively doing a kubectl delete pod on each. Any pods that are part of a deployment/replicaset/statefulset will return to the scheduler and get recreated on another node that is not cordoned. Pods that were created standalone (e.g. with kubectl run) are lost permanently, as sketched below.
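
A rough sketch of the whole cycle, assuming a node named node01:

    kubectl cordon node01                     # node marked unschedulable; existing pods keep running
    kubectl drain node01 --ignore-daemonsets --force
                                              # evicts the pods; --force is what allows standalone
                                              # pods to be deleted (and lost) as discussed above
    kubectl get pods -o wide                  # deployment/replicaset pods reappear on other nodes;
                                              # standalone pods do not
    kubectl uncordon node01                   # when maintenance is done, allow scheduling again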