Question in killer.sh

Q. Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a controlplane node; do not add new labels to any nodes.

Troubleshooting: I added a toleration for the controlplane node and also a nodeSelector, but the Pod is stuck in the Pending state. Below is my YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
  nodeSelector:
    kubernetes.io/hostname: cluster1-controlplane
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"

Is this correct, or do I need to make any modifications?

Hi @uzmashafi061,
Please share the output of `kubectl describe` so we can give you better insight on this.
By the way, did you try the nodeName field?
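For reference, `spec.nodeName` assigns the Pod to the node directly and bypasses the scheduler, so NoSchedule taints are never evaluated. A minimal sketch (hostname taken from the question's cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  nodeName: cluster1-controlplane   # direct assignment; scheduler (and its taint checks) is skipped
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
```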

Regards,

Yes, I am able to run it with nodeName. But my doubt is: why does it not work with nodeSelector?

The `k describe` command shows a taint and toleration error.


I used your Pod manifest with nodeSelector and changed the value of kubernetes.io/hostname to match my cluster's control-plane hostname. The Pod is able to run.

[thor@jump-host ~]$ k get no -owide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   56m   v1.28.0   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   6.2.0-31-generic   containerd://1.7.1
kind-worker          Ready    <none>          55m   v1.28.0   172.18.0.4    <none>        Debian GNU/Linux 11 (bullseye)   6.2.0-31-generic   containerd://1.7.1
kind-worker2         Ready    <none>          55m   v1.28.0   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   6.2.0-31-generic   containerd://1.7.1
[thor@jump-host ~]$ cat <<EOF > pod.yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: pod1
>   namespace: default
> spec:
>   containers:
>   - name: pod1-container
>     image: httpd:2.4.41-alpine
>   nodeSelector:
>     kubernetes.io/hostname: cluster1-controlplane
>   tolerations:
>   - key: "node-role.kubernetes.io/control-plane"
>     operator: "Exists"
>     effect: "NoSchedule"
> EOF
[thor@jump-host ~]$ yq '.spec.nodeSelector."kubernetes.io/hostname" |= "kind-control-plane"' pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
    - name: pod1-container
      image: httpd:2.4.41-alpine
  nodeSelector:
    kubernetes.io/hostname: kind-control-plane
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
[thor@jump-host ~]$ yq '.spec.nodeSelector."kubernetes.io/hostname" |= "kind-control-plane"' pod.yaml | k apply -f -
pod/pod1 configured
[thor@jump-host ~]$ k get po -owide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          7s    192.168.82.2   kind-control-plane   <none>           <none>

[thor@jump-host ~]$

No one can give you better insight based on just the sentence "In k describe command, its showing taint and toleration error."

You need to help us so we can help you.

Below is the error I am getting:

25s Warning FailedScheduling pod/pod1 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) didn’t match Pod’s node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Taints on cluster1-controlplane node:
Taints: node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/master:NoSchedule

Your better insight is already in the messages you posted.

25s Warning FailedScheduling pod/pod1 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) didn’t match Pod’s node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Taints: node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/master:NoSchedule

Your manifest only tolerates node-role.kubernetes.io/control-plane:NoSchedule, not node-role.kubernetes.io/master:NoSchedule.
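Since the node carries both taints, the Pod needs a toleration for each key. A minimal sketch of the corrected tolerations section:

```yaml
tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node-role.kubernetes.io/master"   # also tolerate the legacy taint
  operator: "Exists"
  effect: "NoSchedule"
```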

Why is the cluster set up this way? Please ask Kim Wüstkamp.

As of v1.20, node-role.kubernetes.io/control-plane was introduced, and node-role.kubernetes.io/master is considered deprecated, pending removal in a future release.
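Because the taint key differs across cluster versions, one alternative pattern (used by some DaemonSets, not part of the original answer) is a keyless toleration, which matches every taint regardless of key or effect:

```yaml
tolerations:
- operator: "Exists"   # empty key with Exists tolerates all taints on any node
```

This is broader than the task needs, so tolerating the two specific keys is the more precise fix.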

I have two clusters in my demo below:

The kind-kke-master cluster has both taints, node-role.kubernetes.io/master:NoSchedule and node-role.kubernetes.io/control-plane:NoSchedule, on kke-master-control-plane.

The kind-kke-control-plane cluster has only the taint node-role.kubernetes.io/control-plane:NoSchedule on kke-control-plane-control-plane.

Hope this is worth a million words of explanation.

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ for context in kke-master kke-control-plane ; do k --context kind-"$context" get no ; done
NAME                       STATUS   ROLES           AGE     VERSION
kke-master-control-plane   Ready    control-plane   5m42s   v1.28.0
kke-master-worker          Ready    <none>          5m16s   v1.28.0
kke-master-worker2         Ready    <none>          5m17s   v1.28.0
NAME                              STATUS   ROLES           AGE     VERSION
kke-control-plane-control-plane   Ready    control-plane   4m13s   v1.28.0
kke-control-plane-worker          Ready    <none>          3m54s   v1.28.0
kke-control-plane-worker2         Ready    <none>          3m53s   v1.28.0

sauron @ mordor in on ⛵ kind-kke-master () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ for context in kke-master kke-control-plane ; do k --context kind-"$context" get no "$context"-control-plane -oyaml | yq '.spec.taints' | less; done
───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       β”‚ STDIN
───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1   β”‚ - effect: NoSchedule
   2   β”‚   key: node-role.kubernetes.io/master
   3   β”‚ - effect: NoSchedule
   4   β”‚   key: node-role.kubernetes.io/control-plane
───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       β”‚ STDIN
───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1   β”‚ - effect: NoSchedule
   2   β”‚   key: node-role.kubernetes.io/control-plane
───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
cat <<EOF > /tmp/pod1.yaml # your original manifest
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
  nodeSelector:
    kubernetes.io/hostname: cluster1-controlplane
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
EOF

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ for context in kke-master kke-control-plane ; do context="${context}-control-plane" yq '.spec.nodeSelector."kubernetes.io/hostname" |= env(context)' /tmp/pod1.yaml| k --context "kind-$context" apply -f - ; done

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ for context in kke-master kke-control-plane ; do echo "\nContext: $context" ;k --context kind-"$context" get po ; echo ;done

Context: kke-master
NAME   READY   STATUS    RESTARTS   AGE
pod1   0/1     Pending   0          11s


Context: kke-control-plane
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          9s

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ k --context kind-kke-master describe po pod1 | grep -A4 -i events
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  10s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ yq '.spec.nodeSelector."kubernetes.io/hostname" |= "kke-master-control-plane" | with(.spec.tolerations; select(.[1] += {"key":"node-role.kubernetes.io/master", "operator": "Exists","effect": "NoSchedule"}| .. style="double"))' /tmp/pod1.yaml | tee /tmp/pod1-updated.yaml | k --context kind-kke-master replace -f - --force
pod "pod1" deleted
pod/pod1 replaced

sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ for context in kke-master kke-control-plane ; do echo "\nContext: $context" ;k --context kind-"$context" get po ; echo ;done

Context: kke-master
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          18s


Context: kke-control-plane
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          19m


sauron @ mordor in on ⛵ kind-kke-control-plane () kubernetes-env/kind on 🌱 dev via 🐍 pyenv 3.11.5
λ k --context kind-kke-master get po pod1 -oyaml | yq '.spec.tolerations[]'
effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300