Hi Team,
Kindly assist me with this question. Thanks!
Q11:
We want to deploy a Python-based application on the cluster using a template located at /root/olive-app-cka10-str.yaml on cluster1-controlplane. However, before you proceed, we need to make some modifications to the YAML file as per the following details:
The YAML should also contain a persistent volume claim named olive-pvc-cka10-str to claim 100Mi of storage from the olive-pv-cka10-str PV.
Update the deployment to add a sidecar container named busybox, which can use the busybox image (you might need to add a sleep command for this container to keep it running).
Share the python-data volume with this container and mount the same at path /usr/src. Make sure this container only has read permissions on this volume.
Finally, create a pod using this YAML and make sure the POD is in Running state.
Note: Additionally, you can expose a NodePort service for the application. The service should be named olive-svc-cka10-str and expose port 5000 with a nodePort value of 32006.
However, inclusion/exclusion of this service won't affect the validation for this task.
My answer:
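For reference, this is roughly what I added to the template at /root/olive-app-cka10-str.yaml. It is only a minimal sketch of the new pieces; the storage class and access mode are taken from the existing PV, and everything else in the provided deployment template was left as is.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: olive-pvc-cka10-str
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: olive-stc-cka10-str
  volumeName: olive-pv-cka10-str
  resources:
    requests:
      storage: 100Mi

And inside the deployment's pod template, the sidecar and the shared python-data volume:

      containers:
      # ... existing python container unchanged ...
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]      # keep the sidecar running
        volumeMounts:
        - name: python-data
          mountPath: /usr/src
          readOnly: true                # read-only access to the shared volume
      volumes:
      - name: python-data
        persistentVolumeClaim:
          claimName: olive-pvc-cka10-str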
cluster1-controlplane ~ ➜ k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
demo-pv-cka29-trb 100Mi RWX Retain Available <unset> 17m
olive-pv-cka10-str 150Mi RWX Retain Bound default/olive-pvc-cka10-str olive-stc-cka10-str <unset> 17m
peach-pv-cka05-str 150Mi RWO Retain Available <unset> 18m
cluster1-controlplane ~ ➜ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
demo-pvc-cka29-trb Pending demo-pv-cka29-trb 0 <unset> 17m
olive-pvc-cka10-str Bound olive-pv-cka10-str 150Mi RWX olive-stc-cka10-str <unset> 6m15s
- YAML of the PVC
cluster1-controlplane ~ ➜ k get pvc olive-pvc-cka10-str -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"olive-pvc-cka10-str","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"100Mi"}},"storageClassName":"olive-stc-cka10-str","volumeMode":"Filesystem","volumeName":"olive-pv-cka10-str"}}
    pv.kubernetes.io/bind-completed: "yes"
  creationTimestamp: "2025-05-06T06:50:16Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: olive-pvc-cka10-str
  namespace: default
  resourceVersion: "5916"
  uid: 9d0e3b63-1225-403f-8fd8-fbe98e9c1ccf
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: olive-stc-cka10-str
  volumeMode: Filesystem
  volumeName: olive-pv-cka10-str
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 150Mi
  phase: Bound
- YAML of the pod
cluster1-controlplane ~ ➜ k get pod olive-app-cka10-str-68d657fd7c-88wsc -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-05-06T07:07:24Z"
  generateName: olive-app-cka10-str-68d657fd7c-
  labels:
    app: olive-app-cka10-str
    pod-template-hash: 68d657fd7c
  name: olive-app-cka10-str-68d657fd7c-88wsc
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: olive-app-cka10-str-68d657fd7c
    uid: 67dbfff5-352d-481c-a65e-f14174d21e70
  resourceVersion: "7517"
  uid: d8ada70d-1a73-46aa-84d9-67e1103f8ae3
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - cluster1-controlplane
  containers:
  - image: poroko/flask-demo-app
    imagePullPolicy: Always
    name: python
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/
      name: python-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5rml6
      readOnly: true
  - command:
    - sleep
    - "3600"
    image: busybox
    imagePullPolicy: Always
    name: busybox
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/src
      name: python-data
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5rml6
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: python-data
    persistentVolumeClaim:
      claimName: olive-pvc-cka10-str
  - name: kube-api-access-5rml6
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-06T07:07:24Z"
    message: '0/3 nodes are available: 1 node(s) had volume node affinity conflict,
      2 node(s) didn''t match Pod''s node affinity/selector. preemption: 0/3 nodes
      are available: 3 Preemption is not helpful for scheduling.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
- Checked node label
cluster1-controlplane ~ ➜ k get nodes -l kubernetes.io/hostname=cluster1-controlplane
NAME STATUS ROLES AGE VERSION
cluster1-controlplane Ready control-plane 70m v1.32.0
- Describe output of the pod
Name:             olive-app-cka10-str-68d657fd7c-88wsc
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=olive-app-cka10-str
                  pod-template-hash=68d657fd7c
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/olive-app-cka10-str-68d657fd7c
Containers:
  python:
    Image:        poroko/flask-demo-app
    Port:         5000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/ from python-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rml6 (ro)
  busybox:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /usr/src from python-data (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rml6 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  python-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  olive-pvc-cka10-str
    ReadOnly:   false
  kube-api-access-5rml6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m29s  default-scheduler  0/3 nodes are available: 1 node(s) had volume node affinity conflict, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
I have removed the NoSchedule taint from cluster1-controlplane as well.
cluster1-controlplane ~ ✖ k describe node cluster1-controlplane | grep -i -A1 "taints"
Taints: <none>
Unschedulable: false
I am not sure why it is not able to schedule the pod on node cluster1-controlplane, which has the label kubernetes.io/hostname=cluster1-controlplane.
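Since the scheduler event points at a volume node affinity conflict, my next guess is that the bound PV itself may carry a nodeAffinity pinning it to a node other than cluster1-controlplane. This is the check I am planning to run next (sketch only, I have not captured the output here):

k get pv olive-pv-cka10-str -o yaml | grep -i -A 8 "nodeAffinity"
k describe pv olive-pv-cka10-str | grep -i -A 5 "node affinity"

Is that the right direction, or am I missing something else?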