CKA Mock Exam 2: Question 13

Hi Team,

I have a query about the question below, which says NOT to make adjustments to the deployment's resource limits, yet the solution modifies the deployment itself, which is confusing.

Kindly assist.

Question:
Solve this question on: ssh cluster3-controlplane

Your cluster has a failed deployment named backend-api with multiple pods. Troubleshoot the deployment so that all pods are in a running state. Do not make adjustments to the resource limits defined on the deployment pods.

Solution provided to the question:

SSH into the cluster3-controlplane host
ssh cluster3-controlplane

1. Check Deployment Status
Verify the current status of the deployment:

kubectl get deploy

Expected output (shows that only 2 out of 3 pods are running):

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
backend-api   2/3     2            2           3m48s

The third pod is not being created.

2. Find the Root Cause
Describe the ReplicaSet to check why the third pod is not being created:

kubectl describe replicasets backend-api-7977bfdbd5

The issue is due to a ResourceQuota limitation: the namespace caps memory requests at 300Mi, and the deployment's third pod would exceed it.

Warning  FailedCreate  Error creating: pods "backend-api-7977bfdbd5-hpcjw" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
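For reference, a ResourceQuota that produces this error would look roughly like the following sketch. Only the cpu-mem-quota name and the 300Mi requests.memory cap are confirmed by the event above; the requests.cpu value is taken from a later post in this thread, and any other fields are illustrative.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-mem-quota       # name taken from the FailedCreate event
  namespace: default
spec:
  hard:
    requests.memory: 300Mi  # the cap the third pod's 128Mi request exceeds
    requests.cpu: 300m      # assumed; mentioned later in the thread
```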

3. Modify the Deployment to Fit Within the ResourceQuota
Edit the deployment and reduce memory requests:

kubectl edit deployment backend-api -n default

Modify the resources section:

resources:
  requests:
    cpu: "50m"   # Reduced from 100m to 50m
    memory: "90Mi"   # Reduced from 128Mi to 90Mi (Fits within quota)
  limits:
    cpu: "150m"   
    memory: "150Mi"
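The quota arithmetic behind this step can be sketched quickly (values taken from the FailedCreate event above: a 300Mi requests.memory quota and 3 desired replicas):

```python
# Memory-request quota math for the backend-api deployment.
QUOTA_MI = 300   # requests.memory hard limit from the cpu-mem-quota event
REPLICAS = 3     # desired replicas of backend-api

def fits_quota(request_mi: int, replicas: int = REPLICAS, quota_mi: int = QUOTA_MI) -> bool:
    """True if `replicas` pods, each requesting `request_mi` MiB, fit the quota."""
    return replicas * request_mi <= quota_mi

print(fits_quota(128))  # original request: 3 * 128Mi = 384Mi > 300Mi -> False
print(fits_quota(90))   # reduced request:  3 * 90Mi  = 270Mi <= 300Mi -> True
```

This is why only the third pod fails: two pods consume 256Mi, and the quota admission check rejects any pod whose request would push the total past 300Mi.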

4. Verify the Pods (Still Only 2 Running)
Check if the new pod is scheduled:

kubectl get pods -n default

If the third pod is still missing, the old ReplicaSet might be preventing it.

5. Delete the Old ReplicaSet to Allow New Pods to Start
Find the old ReplicaSet:

kubectl get rs -n default

Delete the outdated ReplicaSet:

kubectl delete rs backend-api-7977bfdbd5 -n default

Wait for the new pods to come up.

6. Confirm All Pods Are Running

Thanks,
Sakshi

Hi @sakshibag80

The question specifically asks not to make adjustments to the resource limits. If you look closely at the solution, it actually updates the resources.requests, leaving the limits untouched.

It’s really important to pay attention to these small details because they can be easy to miss and might lead you off track, especially during an exam. Taking a moment to carefully read and understand exactly what’s being asked can make a big difference.

Hi @Santosh_KodeKloud, thanks for your reply.

However, I am not entirely sure that @sakshibag80 has neglected to follow instructions.

I will paste my solution so that you can compare it with the solution provided by the exam itself. Please note that I only adjusted the resources.requests section of my deployment and deleted the old ReplicaSet (as it was not allowing my pods to run).

I was still marked wrong on this exercise, and quite frankly I do not understand what I may have done wrong. Can you possibly explain why?

cluster3-controlplane ~ ➜  k get pods
NAME                           READY   STATUS    RESTARTS   AGE
backend-api-64ffd77d9d-9lth2   1/1     Running   0          7m13s
backend-api-64ffd77d9d-bv2pd   1/1     Running   0          7m12s
backend-api-64ffd77d9d-gkqfw   1/1     Running   0          7m12s

cluster3-controlplane ~ ➜  k describe deployments.apps backend-api 
Name:                   backend-api
Namespace:              default
CreationTimestamp:      Mon, 21 Apr 2025 11:12:57 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=backend-api
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       app=backend-api
  Annotations:  kubectl.kubernetes.io/restartedAt: 2025-04-21T11:17:53Z
  Containers:
   backend-api:
    Image:      nginx
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     150m
      memory:  150Mi
    Requests:
      cpu:         100m
      memory:      80Mi
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   backend-api-64ffd77d9d (3/3 replicas created)
Events:
  Type    Reason             Age                    From                   Message
  ----    ------             ----                   ----                   -------
  Normal  ScalingReplicaSet  13m                    deployment-controller  Scaled up replica set backend-api-6b6454f5cf from 0 to 3
  Normal  ScalingReplicaSet  11m                    deployment-controller  Scaled up replica set backend-api-78597b8bd5 from 0 to 1
  Normal  ScalingReplicaSet  10m                    deployment-controller  Scaled down replica set backend-api-6b6454f5cf from 3 to 2
  Normal  ScalingReplicaSet  10m                    deployment-controller  Scaled up replica set backend-api-d54c69cd5 from 0 to 1
  Normal  ScalingReplicaSet  9m16s                  deployment-controller  Scaled up replica set backend-api-d54c69cd5 from 1 to 3
  Normal  ScalingReplicaSet  8m52s                  deployment-controller  Scaled down replica set backend-api-78597b8bd5 from 1 to 0
  Normal  ScalingReplicaSet  8m52s                  deployment-controller  Scaled up replica set backend-api-64ffd77d9d from 0 to 1
  Normal  ScalingReplicaSet  8m50s                  deployment-controller  Scaled down replica set backend-api-d54c69cd5 from 3 to 2
  Normal  ScalingReplicaSet  8m50s                  deployment-controller  Scaled up replica set backend-api-64ffd77d9d from 1 to 2
  Normal  ScalingReplicaSet  7m28s (x3 over 8m48s)  deployment-controller  (combined from similar events): Scaled up replica set backend-api-64ffd77d9d from 0 to 3

I tried this question again. The question says to get my pods into a running state without editing the resource limits on the deployment (and I don't think I did). I was hoping these two outputs might make my mistake a lot clearer. My pods were running (without any restarts). Please see the cut-out from my terminal:

student-node ~ ✖ ssh cluster3-controlplane


cluster3-controlplane ~ ➜  k get pods
NAME                           READY   STATUS    RESTARTS   AGE
backend-api-78597b8bd5-97npp   1/1     Running   0          19m
backend-api-78597b8bd5-vbvtj   1/1     Running   0          19m
backend-api-78597b8bd5-w5ddv   1/1     Running   0          19m

cluster3-controlplane ~ ➜  k describe deployments.apps backend-api 
Name:                   backend-api
Namespace:              default
CreationTimestamp:      Tue, 22 Apr 2025 21:16:27 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=backend-api
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=backend-api
  Containers:
   backend-api:
    Image:      nginx
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     150m
      memory:  150Mi
    Requests:
      cpu:         100m
      memory:      80Mi
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   backend-api-78597b8bd5 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  28m   deployment-controller  Scaled up replica set backend-api-6b6454f5cf from 0 to 3
  Normal  ScalingReplicaSet  20m   deployment-controller  Scaled up replica set backend-api-78597b8bd5 from 0 to 1
  Normal  ScalingReplicaSet  20m   deployment-controller  Scaled up replica set backend-api-78597b8bd5 from 1 to 3

How do you ssh into cluster1-controlplane? It's asking me for a password to get in, and no password is provided in the question.


@Vigilante

Can you please share the lab name for this?

Thanks, but I got the resolution: closed the browser >> cleared the cache >> restarted the test. No issue after that.

Hi @Santosh_KodeKloud

Please, did you get an opportunity to look through this? Do you think I did anything wrong? I have gone through the lab (specifically this question) about 10 times now, and I cannot get it to work. I am not adjusting requests.cpu simply because the kubectl events do not indicate any issue with my cpu request.

I would just like to know whether this is an issue with my attempted solution, an issue with the way the test is being marked, or whether I am doing something that goes against best practices.

Thank you very much

Could you share the YAML manifest of the updated file, which was marked as wrong?

Sure. Thanks for taking another look @Santosh_KodeKloud. Please see the result from my kubectl get deployment backend-api -o yaml command. Also notice that the pods were running without any restarts.

In the solution provided after my submission and review, the cpu request is edited downwards. I chose not to edit my cpu request because it fits my 3 pods exactly: each pod requests 100m, the maximum I can use in the namespace is 300m, and so the deployment fits the quota as-is.
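That cpu arithmetic can be double-checked quickly (a sketch; the 100m-per-pod request is from the deployment spec below, and the 300m requests.cpu quota is as stated in this post, not shown in any quota output here):

```python
# Checking the cpu-quota reasoning: do 3 pods at 100m cpu fit a 300m quota?
replicas = 3
cpu_request_m = 100   # millicores requested per pod (from the deployment spec)
cpu_quota_m = 300     # assumed requests.cpu quota, as stated in this post

total = replicas * cpu_request_m
print(total, total <= cpu_quota_m)  # 300 True -> the cpu requests fit exactly
```

So, unlike memory, there is no headroom problem on cpu, and no reason to lower the cpu request.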

cluster3-controlplane ~ ➜  k get pods
NAME                           READY   STATUS    RESTARTS   AGE
backend-api-78597b8bd5-fnjj5   1/1     Running   0          2m26s
backend-api-78597b8bd5-lz56m   1/1     Running   0          2m26s
backend-api-78597b8bd5-stlx5   1/1     Running   0          2m26s

cluster3-controlplane ~ ➜  k get deployments.apps backend-api -o yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"backend-api","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"backend-api"}},"template":{"metadata":{"labels":{"app":"backend-api"}},"spec":{"containers":[{"image":"nginx","name":"backend-api","resources":{"limits":{"cpu":"150m","memory":"150Mi"},"requests":{"cpu":"100m","memory":"128Mi"}}}]}}}}
  creationTimestamp: "2025-04-23T22:45:08Z"
  generation: 2
  name: backend-api
  namespace: default
  resourceVersion: "2246"
  uid: 7d66dcf7-4a0b-4f83-b826-ed111fb58d6f
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: backend-api
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend-api
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: backend-api
        resources:
          limits:
            cpu: 150m
            memory: 150Mi
          requests:
            cpu: 100m
            memory: 80Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2025-04-23T22:46:09Z"
    lastUpdateTime: "2025-04-23T22:46:09Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2025-04-23T22:45:08Z"
    lastUpdateTime: "2025-04-23T22:46:09Z"
    message: ReplicaSet "backend-api-78597b8bd5" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

cluster3-controlplane ~ ➜  

@rob_kodekloud @Alistair_KodeKloud @Santosh_KodeKloud
Please I am still awaiting clarification for this.

Alright, closing this thread. The video solutions for the CKA series mock exams were posted today, and the video did not change the cpu requests either. That confirms what I did was right.

Thank you. @Santosh_KodeKloud

Hi @olakunlecrown28 @Santosh_KodeKloud,

I did following steps but still not able to fix the issue. Kindly assist. Thanks !

cluster3-controlplane ~ ➜  k get pods 
NAME                           READY   STATUS    RESTARTS   AGE
backend-api-6b6454f5cf-lcssm   1/1     Running   0          2m33s
backend-api-6b6454f5cf-nt6kx   1/1     Running   0          2m33s

cluster3-controlplane ~ ➜  k get deployments
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
backend-api   2/3     2            2           2m39s

cluster3-controlplane ~ ➜  k get events | grep -i "error" | tail -2
2m45s       Warning   FailedCreate                     replicaset/backend-api-6b6454f5cf   Error creating: pods "backend-api-6b6454f5cf-tlj2d" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
2m36s       Warning   FailedCreate                     replicaset/backend-api-6b6454f5cf   (combined from similar events): Error creating: pods "backend-api-6b6454f5cf-q9cnl" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi

cluster3-controlplane ~ ➜  k edit deployments.apps 
deployment.apps/backend-api edited

cluster3-controlplane ~ ➜  k get  deployments.apps backend-api -o yaml | grep -i -A6 "resources:"
        resources:
          limits:
            cpu: 150m
            memory: 150Mi
          requests:
            cpu: 100m
            memory: 80Mi   <= changed this from 128Mi to 80Mi

cluster3-controlplane ~ ➜  k get pods 
NAME                           READY   STATUS    RESTARTS   AGE
backend-api-6b6454f5cf-lcssm   1/1     Running   0          4m31s
backend-api-6b6454f5cf-nt6kx   1/1     Running   0          4m31s

cluster3-controlplane ~ ➜  k get deployments.apps 
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
backend-api   2/3     0            2           4m36s

cluster3-controlplane ~ ➜  k get replicasets
NAME                     DESIRED   CURRENT   READY   AGE
backend-api-6b6454f5cf   3         2         2       4m45s
backend-api-78597b8bd5   1         0         0       71s

cluster3-controlplane ~ ➜  k get replicasets --sort-by='{.metadata.creationTimestamp}'
NAME                     DESIRED   CURRENT   READY   AGE
backend-api-6b6454f5cf   3         2         2       9m3s
backend-api-78597b8bd5   1         0         0       5m29s

The solution also mentions deleting the ReplicaSet, but I am not sure which ReplicaSet should be deleted.

Kindly assist.

Thanks

Even after deleting the old ReplicaSet, not all the pods are running, and the ReplicaSet backend-api-78597b8bd5 still exists.

cluster3-controlplane ~ ➜  k get replicasets.apps --sort-by='{.metadata.creationTimestamp}' 
NAME                     DESIRED   CURRENT   READY   AGE
backend-api-6b6454f5cf   3         2         2       58m
backend-api-78597b8bd5   1         0         0       78s

cluster3-controlplane ~ ➜  k delete replicaset --force backend-api-78597b8bd5 
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicaset.apps "backend-api-78597b8bd5" force deleted

cluster3-controlplane ~ ➜  k delete pod -l  app=backend-api
pod "backend-api-6b6454f5cf-86hn6" deleted
pod "backend-api-6b6454f5cf-mts2m" deleted

cluster3-controlplane ~ ➜  k get pods --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   LABELS
backend-api-6b6454f5cf-82xqv   1/1     Running   0          4s    app=backend-api,pod-template-hash=6b6454f5cf
backend-api-6b6454f5cf-95tpn   1/1     Running   0          4s    app=backend-api,pod-template-hash=6b6454f5cf

cluster3-controlplane ~ ➜  k get replicasets.apps --sort-by='{.metadata.creationTimestamp}' 
NAME                     DESIRED   CURRENT   READY   AGE
backend-api-6b6454f5cf   3         2         2       59m
backend-api-78597b8bd5   1         0         0       20s

Hi @olakunlecrown28 , Could you please let me know where are solutions to the questions posted ?

Thanks,
Sakshi

Hi @sakshibag80

I think you might need to log out and log back into your KodeKloud learning account, then select the CKA course. You should notice that after each of the mock exams, solution videos have been posted. On my end the solution video was titled: “Mock Exam-2: Step-by-step solutions”.

Hi @olakunlecrown28 ,

Thanks for getting back. I think there is some confusion. I am referring to the solutions for the “Ultimate Certified Kubernetes Administrator (CKA) Mock Exam Series”, but I think you are referring to the “CKA” course.

Thanks,
Sakshi

Ah! Gotcha. Just restarted on those. I’ll see if I notice the same thing on those tests.
Thanks for clarifying.

Hi @sakshibag80

I just went through the ultimate series. It appears it’s the exact same question with the exact same solution. So if you watch the solution on the CKA course, it might help.

Cheers.


Thanks @olakunlecrown28.

Regards,
Sakshi