CKA mock exam ultimate series: Mock exam 3: question 111

Hi Team,

Kindly assist me with this question. Thanks!

Q11:

We want to deploy a Python-based application on the cluster using a template located at /root/olive-app-cka10-str.yaml on cluster1-controlplane. However, before you proceed, we need to make some modifications to the YAML file as per the following details:

The YAML should also contain a persistent volume claim named olive-pvc-cka10-str that claims 100Mi of storage from the olive-pv-cka10-str PV.

Update the deployment to add a sidecar container named busybox, which can use busybox image (you might need to add a sleep command for this container to keep it running.)

Share the python-data volume with this container and mount the same at path /usr/src. Make sure this container only has read permissions on this volume.

Finally, create a pod using this YAML and make sure the POD is in Running state.

Note: Additionally, you can expose a NodePort service for the application. The service should be named olive-svc-cka10-str and expose port 5000 with a nodePort value of 32006.
However, inclusion/exclusion of this service won't affect the validation for this task.
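For reference, the PVC requirement above can be expressed roughly like this (a minimal sketch; the storageClassName value is taken from the existing olive-pv-cka10-str PV, which is visible in the outputs further down):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: olive-pvc-cka10-str
spec:
  accessModes:
    - ReadWriteMany                        # compatible with the PV's RWX access mode
  storageClassName: olive-stc-cka10-str    # matches the PV's storage class
  volumeName: olive-pv-cka10-str           # bind this claim to the named PV
  resources:
    requests:
      storage: 100Mi                       # claim 100Mi, as the task requires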

My answer:

cluster1-controlplane ~ ➜  k get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                         STORAGECLASS          VOLUMEATTRIBUTESCLASS   REASON   AGE
demo-pv-cka29-trb    100Mi      RWX            Retain           Available                                                       <unset>                          17m
olive-pv-cka10-str   150Mi      RWX            Retain           Bound       default/olive-pvc-cka10-str   olive-stc-cka10-str   <unset>                          17m
peach-pv-cka05-str   150Mi      RWO            Retain           Available                                                       <unset>                          18m

cluster1-controlplane ~ ➜  k get pvc
NAME                  STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
demo-pvc-cka29-trb    Pending   demo-pv-cka29-trb    0                                               <unset>                 17m
olive-pvc-cka10-str   Bound     olive-pv-cka10-str   150Mi      RWX            olive-stc-cka10-str   <unset>                 6m15s
  • YAML of the PVC
cluster1-controlplane ~ ➜  k get pvc olive-pvc-cka10-str -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"olive-pvc-cka10-str","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"100Mi"}},"storageClassName":"olive-stc-cka10-str","volumeMode":"Filesystem","volumeName":"olive-pv-cka10-str"}}
    pv.kubernetes.io/bind-completed: "yes"
  creationTimestamp: "2025-05-06T06:50:16Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: olive-pvc-cka10-str
  namespace: default
  resourceVersion: "5916"
  uid: 9d0e3b63-1225-403f-8fd8-fbe98e9c1ccf
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: olive-stc-cka10-str
  volumeMode: Filesystem
  volumeName: olive-pv-cka10-str
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 150Mi
  phase: Bound
  • YAML of the pod
cluster1-controlplane ~ ➜  k get pod olive-app-cka10-str-68d657fd7c-88wsc -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-05-06T07:07:24Z"
  generateName: olive-app-cka10-str-68d657fd7c-
  labels:
    app: olive-app-cka10-str
    pod-template-hash: 68d657fd7c
  name: olive-app-cka10-str-68d657fd7c-88wsc
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: olive-app-cka10-str-68d657fd7c
    uid: 67dbfff5-352d-481c-a65e-f14174d21e70
  resourceVersion: "7517"
  uid: d8ada70d-1a73-46aa-84d9-67e1103f8ae3
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - cluster1-controlplane
  containers:
  - image: poroko/flask-demo-app
    imagePullPolicy: Always
    name: python
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/
      name: python-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5rml6
      readOnly: true
  - command:
    - sleep
    - "3600"
    image: busybox
    imagePullPolicy: Always
    name: busybox
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/src
      name: python-data
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5rml6
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: python-data
    persistentVolumeClaim:
      claimName: olive-pvc-cka10-str
  - name: kube-api-access-5rml6
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-06T07:07:24Z"
    message: '0/3 nodes are available: 1 node(s) had volume node affinity conflict,
      2 node(s) didn''t match Pod''s node affinity/selector. preemption: 0/3 nodes
      are available: 3 Preemption is not helpful for scheduling.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
  • Checked the node label
cluster1-controlplane ~ ➜  k get nodes -l kubernetes.io/hostname=cluster1-controlplane
NAME                    STATUS   ROLES           AGE   VERSION
cluster1-controlplane   Ready    control-plane   70m   v1.32.0

  • Describe output for the pod

Name:             olive-app-cka10-str-68d657fd7c-88wsc
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=olive-app-cka10-str
                  pod-template-hash=68d657fd7c
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/olive-app-cka10-str-68d657fd7c
Containers:
  python:
    Image:        poroko/flask-demo-app
    Port:         5000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/ from python-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rml6 (ro)
  busybox:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /usr/src from python-data (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rml6 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  python-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  olive-pvc-cka10-str
    ReadOnly:   false
  kube-api-access-5rml6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m29s  default-scheduler  0/3 nodes are available: 1 node(s) had volume node affinity conflict, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

I have removed the NoSchedule taint from cluster1-controlplane as well.
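(For reference, removing a control-plane NoSchedule taint typically looks like the command below; the exact taint key depends on how the cluster was set up, e.g. node-role.kubernetes.io/control-plane on newer clusters.)

kubectl taint nodes cluster1-controlplane node-role.kubernetes.io/control-plane:NoSchedule-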

cluster1-controlplane ~ ✖ k describe node cluster1-controlplane | grep -i -A1 "taints"
Taints:             <none>
Unschedulable:      false

I am not sure why it's not able to schedule the pod on node cluster1-controlplane, which has the label kubernetes.io/hostname=cluster1-controlplane.

Hi @sakshibag80

This is certainly an interesting one. Would you still happen to have access to the terminal (probably not)? I was just wondering if you could post the PV YAML (only because I find it easier to read things in YAML, and it seems like the issue is coming from the PV, which is super strange).

I attempted the lab just before you and I didn’t have any volume-node issues.

I actually feel like your pod's nodeAffinity rules are working as they should (which is why you get the "2 node(s) didn't match Pod's node affinity/selector" part of the message), but there may be something blocking scheduling from the PV side (which is the first part of the error in your pod's events: the volume node affinity conflict). Did you edit the PV at all (again, probably not)? Interesting case, though.
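If you still have access, something like this would show whether the PV itself pins the volume to a particular node, which is what a volume node affinity conflict usually points at:

kubectl get pv olive-pv-cka10-str -o yaml
# or only the relevant section:
kubectl get pv olive-pv-cka10-str -o jsonpath='{.spec.nodeAffinity}'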

Hi @sakshibag80

I think you got confused when the question says: "We want to deploy a Python-based application on the cluster using a template located at /root/olive-app-cka10-str.yaml on cluster1-controlplane."

It means the template is located on cluster1-controlplane, not that the Pod should be scheduled on cluster1-controlplane. The template already has a nodeAffinity for node01.
You just need to add a sidecar container with volumeMounts and the matching PVC.
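Roughly, the pod template in the deployment ends up with something like this (a sketch only, based on the pod output you posted; keep the existing nodeAffinity for node01 untouched):

  containers:
  - name: python
    image: poroko/flask-demo-app
    ports:
    - containerPort: 5000
    volumeMounts:
    - name: python-data
      mountPath: /usr/share/
  - name: busybox                    # added sidecar
    image: busybox
    command: ["sleep", "3600"]       # keeps the container running
    volumeMounts:
    - name: python-data              # same volume as the python container
      mountPath: /usr/src
      readOnly: true                 # read-only, as required
  volumes:
  - name: python-data
    persistentVolumeClaim:
      claimName: olive-pvc-cka10-str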

Optionally, you can create a NodePort Service.
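A minimal sketch for that Service (the selector here is assumed to match the app: olive-app-cka10-str label on the deployment's pods):

apiVersion: v1
kind: Service
metadata:
  name: olive-svc-cka10-str
spec:
  type: NodePort
  selector:
    app: olive-app-cka10-str       # assumed: matches the deployment's pod labels
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 32006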

Ha! There it is! That was it: the template was changed to include a nodeAffinity for cluster1-controlplane. lol. Great catch!

I didn't misunderstand the problem statement, but I COMPLETELY get how I could easily have misread that. In my opinion, the question should just read "... deploy a Python-based app on the cluster using the template located at /root/olive-app-cka10-str.yaml" (no need for the additional "on cluster1-controlplane").

I know the CKA exam isn't set up for "gotcha scenarios", but this still makes me freak out a little bit when I think about the possibility of misunderstanding a problem statement because of how it is worded. :slight_smile:

Think about what you would do if the template were located on the student node but you needed to deploy it on cluster1.
It's important to note that we are working with two different machines: the student node and the cluster control plane. Clarifying where the template is located helps avoid confusion and pre-empts questions like, "Where is the template located?"

And as for "the CKA exam isn't set up for gotcha scenarios": I would say you should be prepared for some, for sure.
:slight_smile:

Hi @olakunlecrown28 ,

Thanks for getting back.

I didn’t edit the PV in the question.

Regards,
Sakshi

Hi @Santosh_KodeKloud ,

You are right. I got confused by the question and thought that we needed to schedule this on cluster1-controlplane, so I changed node01 in the YAML.

I will try this again without changing node01 in the YAML and see how I go with this.

Regards,
Sakshi

Hi @Santosh_KodeKloud ,

I tried this question again by scheduling on node01, and everything is working as expected. I think the issue was due to scheduling the pod on cluster1-controlplane.

One more question:

When a PV is bound via a StorageClass, do we need to define both volumeName and storageClassName in the PVC YAML? Kindly assist. Thanks!

Regards,
Sakshi

You can read more about it in the Kubernetes documentation on defining storageClassName and on defining volumeName.
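In short: storageClassName acts as a binding constraint (the PVC's class has to match the PV's class), while volumeName pre-binds the claim to one specific PV. You don't always need both; without volumeName the claim will bind to any available PV that matches the class, size, and access modes. A minimal sketch of a claim that pins itself to a specific PV (field values taken from this lab):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: olive-pvc-cka10-str
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: olive-stc-cka10-str   # must match the PV's storageClassName
  volumeName: olive-pv-cka10-str          # optional: pins the claim to this exact PV
  resources:
    requests:
      storage: 100Mi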