Good day,
I need clarity on why I was marked wrong. Sincere apologies for the long post; I was just trying to be detailed.
Question:
Solve this question on: ssh cluster1-controlplane
We want to deploy a Python-based application on the cluster
using a template located at /root/olive-app-cka10-str.yaml
on cluster1-controlplane. However, before you proceed, we need
to make some modifications to the YAML file as per the
following details:
The YAML should also contain a persistent volume claim with
name olive-pvc-cka10-str to claim a 100Mi of storage from
olive-pv-cka10-str PV.
Update the deployment to add a sidecar container named
busybox, which can use busybox image (you might need to
add a sleep command for this container to keep it running.)
Share the python-data volume with this container and
mount the same at path /usr/src. Make sure this container
only has read permissions on this volume.
Finally, create a pod using this YAML and make sure the POD
is in Running state.
Note: Additionally, you can expose a NodePort service for the
application. The service should be named olive-svc-cka10-str
and expose port 5000 with a nodePort value of 32006.
However, inclusion/exclusion of this service won't affect the
validation for this task.
Based on those instructions, here is what I did:
cluster1-controlplane ~ ➜ cat /root/olive-app-cka10-str.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: olive-app-cka10-str
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: olive-app-cka10-str
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - cluster1-node01
      containers:
      - name: python
        image: poroko/flask-demo-app
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: python-data
          mountPath: /usr/share/
      - command:
        - sleep
        - "6000"
        image: busybox
        name: busybox
        volumeMounts:
        - name: python-data
          mountPath: /usr/src/
          readOnly: true
      volumes:
      - name: python-data
        persistentVolumeClaim:
          claimName: olive-pvc-cka10-str
  selector:
    matchLabels:
      app: olive-app-cka10-str
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: olive-pvc-cka10-str
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  volumeName: olive-pv-cka10-str
  storageClassName: olive-stc-cka10-str
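As an extra sanity check on the PVC requirement, these are commands I could also have run (a sketch; exact output depends on the lab environment):

```shell
# STATUS should be Bound and VOLUME should be olive-pv-cka10-str
kubectl get pvc olive-pvc-cka10-str

# Confirm both containers and their volume mounts landed in the Deployment spec
kubectl describe deployment olive-app-cka10-str
```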
cluster1-controlplane ~ ➜ k get pods olive-app-cka10-str-65d7547477-gfcwz
NAME READY STATUS RESTARTS AGE
olive-app-cka10-str-65d7547477-gfcwz 2/2 Running 0 44m
cluster1-controlplane ~ ➜ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cluster1-controlplane Ready control-plane 136m v1.32.0 192.168.78.152 <none> Ubuntu 22.04.5 LTS 5.15.0-1078-gcp containerd://1.6.26
cluster1-node01 Ready <none> 135m v1.32.0 192.168.78.166 <none> Ubuntu 22.04.4 LTS 5.15.0-1078-gcp containerd://1.6.26
cluster1-node02 Ready <none> 135m v1.32.0 192.168.231.178 <none> Ubuntu 22.04.4 LTS 5.15.0-1081-gcp containerd://1.6.26
cluster1-controlplane ~ ➜ curl 192.168.78.152:32006 #notice how I get a response from the pod
Hello World Pyvo 1!
cluster1-controlplane ~ ➜ k exec -it olive-app-cka10-str-65d7547477-gfcwz -c busybox -- ls /usr/src
cluster1-controlplane ~ ➜ k exec -it olive-app-cka10-str-65d7547477-gfcwz -c busybox -- ls /usr
bin sbin src
cluster1-controlplane ~ ➜ k exec -it olive-app-cka10-str-65d7547477-gfcwz -c python -- ls /usr/share
cluster1-controlplane ~ ➜ k exec -it olive-app-cka10-str-65d7547477-gfcwz -c python -- ls /usr
bin games include lib lib64 libexec local sbin share src tmp
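The `ls` checks above only prove the mount exists. To prove the busybox mount is actually read-only, a write attempt from each side should behave differently (a sketch, assuming the same pod name; the filename `probe` is just an example):

```shell
# Writing through the python container should succeed (its mount is read-write)
kubectl exec olive-app-cka10-str-65d7547477-gfcwz -c python -- touch /usr/share/probe

# The shared file should now be visible from the busybox side at /usr/src
kubectl exec olive-app-cka10-str-65d7547477-gfcwz -c busybox -- ls /usr/src

# Writing through the busybox container should fail with something like
# "touch: /usr/src/probe2: Read-only file system"
kubectl exec olive-app-cka10-str-65d7547477-gfcwz -c busybox -- touch /usr/src/probe2
```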
cluster1-controlplane ~ ➜ k get pods olive-app-cka10-str-65d7547477-gfcwz -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: ac8232b14ef1965b780e4090e18394c932cbc3ce2b0fbba2f8c6185a2132e040
    cni.projectcalico.org/podIP: 172.17.1.13/32
    cni.projectcalico.org/podIPs: 172.17.1.13/32
  creationTimestamp: "2025-05-05T20:19:05Z"
  generateName: olive-app-cka10-str-65d7547477-
  labels:
    app: olive-app-cka10-str
    pod-template-hash: 65d7547477
  name: olive-app-cka10-str-65d7547477-gfcwz
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: olive-app-cka10-str-65d7547477
    uid: 8c103bd8-9c19-4784-a608-52f2edb9efda
  resourceVersion: "9688"
  uid: 4b1c2e0c-2aa7-4fdf-95a7-38af35960c6f
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - cluster1-node01
  containers:
  - image: poroko/flask-demo-app
    imagePullPolicy: Always
    name: python
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/
      name: python-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rp6lc
      readOnly: true
  - command:
    - sleep
    - "6000"
    image: busybox
    imagePullPolicy: Always
    name: busybox
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/src/
      name: python-data
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rp6lc
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: cluster1-node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: python-data
    persistentVolumeClaim:
      claimName: olive-pvc-cka10-str
  - name: kube-api-access-rp6lc
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-05T20:19:22Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-05-05T20:19:12Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-05-05T20:19:22Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-05-05T20:19:22Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-05-05T20:19:12Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://ab83807fe7a836f3f7cb9c4e752f2b930850508e766f962903fd672e10e118cd
    image: docker.io/library/busybox:latest
    imageID: docker.io/library/busybox@sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f
    lastState: {}
    name: busybox
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-05-05T20:19:21Z"
    volumeMounts:
    - mountPath: /usr/src/
      name: python-data
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rp6lc
      readOnly: true
      recursiveReadOnly: Disabled
  - containerID: containerd://20f59fcedfb79e7a22e33a73c6a3464a327df0d99e1a14a7dfc276cced6c29c1
    image: docker.io/poroko/flask-demo-app:latest
    imageID: docker.io/poroko/flask-demo-app@sha256:c52bfc42b8766566df7da383f6e08d77f56ca022c7ea94fce51aa5b7ef66639b
    lastState: {}
    name: python
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-05-05T20:19:21Z"
    volumeMounts:
    - mountPath: /usr/share/
      name: python-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rp6lc
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 192.168.78.166
  hostIPs:
  - ip: 192.168.78.166
  phase: Running
  podIP: 172.17.1.13
  podIPs:
  - ip: 172.17.1.13
  qosClass: BestEffort
  startTime: "2025-05-05T20:19:12Z"
I was marked wrong, so I am also pasting the proposed solution here for comparison:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: olive-pvc-cka10-str
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: olive-stc-cka10-str
  volumeName: olive-pv-cka10-str
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: olive-app-cka10-str
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: olive-app-cka10-str
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - cluster1-node01
      containers:
      - name: python
        image: poroko/flask-demo-app
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: python-data
          mountPath: /usr/share/
      - name: busybox
        image: busybox
        command:
        - "bin/sh"
        - "-c"
        - "sleep 10000"
        volumeMounts:
        - name: python-data
          mountPath: "/usr/src"
          readOnly: true
      volumes:
      - name: python-data
        persistentVolumeClaim:
          claimName: olive-pvc-cka10-str
  selector:
    matchLabels:
      app: olive-app-cka10-str
---
apiVersion: v1
kind: Service
metadata:
  name: olive-svc-cka10-str
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 32006
  selector:
    app: olive-app-cka10-str
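As far as I can tell, the only differences from my manifest are cosmetic (a trailing slash on the mount path and the form of the sleep command). One way to dump exactly what the API server stored for the fields the validator presumably inspects (a sketch; the jsonpath expression is my own, not from the course):

```shell
# Print the busybox container's volumeMounts as stored by the API server
kubectl get pod -l app=olive-app-cka10-str \
  -o jsonpath='{.items[0].spec.containers[?(@.name=="busybox")].volumeMounts}'
```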
Details of my test results for the question:
- Is the olive-pvc-cka10-str PVC created?
- Is the main container created as per the requirements?
- Is the sidecar container created as per the requirements?
- Is the pod in the Running state?