Issues with CKAD Mock Exam Grading on KodeKloud

Hey community,

I’ve been working through multiple CKAD mock exams on KodeKloud, but I’ve noticed that some questions are being marked incorrect without any clear reason. I’d appreciate some help with the following question, which seems to be problematic:

Question 1:
For this question, please set the context to cluster3 by running:

kubectl config use-context cluster3

In this task, we have to create two identical environments running different versions of the application. The team decided to use the blue/green deployment method to deploy a total of 10 application pods, which mitigates common risks such as downtime and provides rollback capability.

Also, we have to route traffic in such a way that 30% of the traffic is sent to the green-apd environment and the rest is sent to the blue-apd environment. All the development processes will happen on cluster3 because it has enough resources for scalability and utility consumption.

Specification details for creating a blue-apd deployment are listed below:

  1. The name of the deployment is blue-apd.

  2. Use the label type-one: blue.

  3. Use the image kodekloud/webapp-color:v1.

  4. Add labels to the pod type-one: blue and version: v1.

Specification details for creating a green-apd deployment are listed below:

  1. The name of the deployment is green-apd.

  2. Use the label type-two: green.

  3. Use the image kodekloud/webapp-color:v2.

  4. Add labels to the pod type-two: green and version: v1.

We have to create a service called route-apd-svc for these deployments. Details are here:

  1. The name of the service is route-apd-svc.

  2. Use the correct service type to access the application from outside the cluster, and the application should listen on port 8080.

  3. Use the selector label version: v1.

NOTE: We do not need to increase replicas for the deployments, and all the resources should be created in the default namespace.

You can check the status of the application from the terminal by running the curl command with the following syntax:

curl http://cluster3-controlplane:NODE-PORT

You can SSH into cluster3 using the ssh cluster3-controlplane command.

Answer by KodeKloud
Run the following command to change the context:

kubectl config use-context cluster3

In this task, we will use the kubectl command. Here are the steps:

  1. Use the kubectl create command to generate a deployment manifest file as follows:
kubectl create deployment blue-apd --image=kodekloud/webapp-color:v1 --dry-run=client -o yaml > <FILE-NAME-1>.yaml

Do the same for the other deployment and service.

kubectl create deployment green-apd --image=kodekloud/webapp-color:v2 --dry-run=client -o yaml > <FILE-NAME-2>.yaml
kubectl create service nodeport route-apd-svc --tcp=8080:8080 --dry-run=client -o yaml > <FILE-NAME-3>.yaml
  2. Open the file with any text editor, such as vi or nano, and make the changes given in the specifications. It should look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type-one: blue
  name: blue-apd
spec:
  replicas: 7
  selector:
    matchLabels:
      type-one: blue
      version: v1
  template:
    metadata:
      labels:
        version: v1
        type-one: blue
    spec:
      containers:
        - image: kodekloud/webapp-color:v1
          name: blue-apd

We will deploy a total of 10 application pods. Also, we have to route 70% traffic to blue-apd and 30% traffic to the green-apd deployment according to the task description.

Since the service distributes traffic equally across all matching pods, we set the replica count to 7 on the blue-apd deployment (and 3 on green-apd) so that the service sends ~70% of the traffic to the blue pods.
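
As a quick sanity check (not part of the official answer), once everything is created you can count the endpoints behind the service; with 7 blue and 3 green pods ready, it should front 10 pod IPs:

kubectl get endpoints route-apd-svc -o jsonpath='{.subsets[*].addresses[*].ip}' | wc -w
# expected output: 10 (7 from blue-apd, 3 from green-apd)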

The green-apd deployment should look like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type-two: green
  name: green-apd
spec:
  replicas: 3
  selector:
    matchLabels:
      type-two: green
      version: v1
  template:
    metadata:
      labels:
        type-two: green
        version: v1
    spec:
      containers:
        - image: kodekloud/webapp-color:v2
          name: green-apd

The route-apd-svc service should look like this:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: route-apd-svc
  name: route-apd-svc
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    version: v1
  3. Now, create the deployments and the service by using the kubectl create -f command:
kubectl create -f <FILE-NAME-1>.yaml -f <FILE-NAME-2>.yaml -f <FILE-NAME-3>.yaml
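
To find the NodePort for the curl test, one option is the following (a sketch; the port number is assigned by the cluster):

kubectl get svc route-apd-svc -o jsonpath='{.spec.ports[0].nodePort}'
curl http://cluster3-controlplane:<NODE-PORT>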

Details

Is blue deployment configured correctly?

Is green deployment configured correctly?

Is service configured correctly?

And this is my solution:

student-node ~ ➜  k get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
blue-apd    7/7     7            7           72m
green-apd   3/3     3            3           70m

student-node ~ ➜  k get deploy blue-apd  -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"blue-apd","type-one":"blue"},"name":"blue-apd","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"blue-apd","type-one":"blue","version":"v1"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"blue-apd","type-one":"blue","version":"v1"}},"spec":{"containers":[{"image":"kodekloud/webapp-color:v1","name":"webapp-color","resources":{}}]}}},"status":{}}
  creationTimestamp: "2024-05-30T02:27:02Z"
  generation: 2
  labels:
    app: blue-apd
    type-one: blue
  name: blue-apd
  namespace: default
  resourceVersion: "18689"
  uid: f727f3a8-260c-4ed2-8d4e-371f6dd6f445
spec:
  progressDeadlineSeconds: 600
  replicas: 7
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: blue-apd
      type-one: blue
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: blue-apd
        type-one: blue
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v1
        imagePullPolicy: IfNotPresent
        name: webapp-color
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 7
  conditions:
  - lastTransitionTime: "2024-05-30T02:27:02Z"
    lastUpdateTime: "2024-05-30T02:27:08Z"
    message: ReplicaSet "blue-apd-7f65c5fd79" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-05-30T02:33:22Z"
    lastUpdateTime: "2024-05-30T02:33:22Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 7
  replicas: 7
  updatedReplicas: 7

student-node ~ ➜  k get deploy green-apd  -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"green-apd","type-two":"green"},"name":"green-apd","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"green-apd","type-two":"green","version":"v1"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"green-apd","type-two":"green","version":"v1"}},"spec":{"containers":[{"image":"kodekloud/webapp-color:v2","name":"webapp-color","resources":{}}]}}},"status":{}}
  creationTimestamp: "2024-05-30T02:28:47Z"
  generation: 2
  labels:
    app: green-apd
    type-two: green
  name: green-apd
  namespace: default
  resourceVersion: "18578"
  uid: 1ea6caf5-b05d-41e8-9cc9-ebe9c05cff76
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: green-apd
      type-two: green
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: green-apd
        type-two: green
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v2
        imagePullPolicy: IfNotPresent
        name: webapp-color
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-05-30T02:28:47Z"
    lastUpdateTime: "2024-05-30T02:28:50Z"
    message: ReplicaSet "green-apd-d786b9498" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-05-30T02:33:07Z"
    lastUpdateTime: "2024-05-30T02:33:07Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

student-node ~ ➜  k get deploy green-apd  -o yaml^C

student-node ~ ✖ k get svc route-apd-svc -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-05-30T02:30:05Z"
  labels:
    app: route-apd-svc
  name: route-apd-svc
  namespace: default
  resourceVersion: "18358"
  uid: 6d5a62c8-1231-4195-9e3e-626aa39a6e93
spec:
  clusterIP: 10.104.149.224
  clusterIPs:
  - 10.104.149.224
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: "8080"
    nodePort: 32744
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    version: v1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

student-node ~ ➜  k describe svc route-apd-svc
Name:                     route-apd-svc
Namespace:                default
Labels:                   app=route-apd-svc
Annotations:              <none>
Selector:                 version=v1
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.149.224
IPs:                      10.104.149.224
Port:                     8080  8080/TCP
TargetPort:               8080/TCP
NodePort:                 8080  32744/TCP
Endpoints:                10.244.192.1:8080,10.244.192.10:8080,10.244.192.2:8080 + 7 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

student-node ~ ➜  curl http://cluster3-controlplane:32744
<!doctype html>
<title>Hello from Flask</title>
<body style="background: #16a085;"></body>
<div style="color: #e4e4e4;
    text-align:  center;
    height: 90px;
    vertical-align:  middle;">

  <h1>Hello from green-apd-d786b9498-ppd6m!</h1>


  
  <h2>
    Application Version: v2
  </h2>
  

</div>

There is nothing wrong here, but the grader marks it as wrong. Can I know the reason?

Question 2:
For this question, please set the context to cluster1 by running:

kubectl config use-context cluster1

Create a ConfigMap named ckad04-config-multi-env-files-aecs in the default namespace from the files provided in the /root/ckad04-multi-cm directory.

Solution by KodeKloud:

student-node ~ ➜  kubectl config use-context cluster1
Switched to context "cluster1".

student-node ~ ➜  kubectl create configmap ckad04-config-multi-env-files-aecs \
         --from-env-file=/root/ckad04-multi-cm/file1.properties \
         --from-env-file=/root/ckad04-multi-cm/file2.properties
configmap/ckad04-config-multi-env-files-aecs created

student-node ~ ➜  k get cm ckad04-config-multi-env-files-aecs -o yaml
apiVersion: v1
data:
  allowed: "true"
  difficulty: fairlyEasy
  exam: ckad
  modetype: openbook
  practice: must
  retries: "2"
kind: ConfigMap
metadata:
  name: ckad04-config-multi-env-files-aecs
  namespace: default

Details

Is the ConfigMap created with the proper configuration?

My answer:

k get cm ckad04-config-multi-env-files-aecs -o yaml
apiVersion: v1
data:
  file1.properties: |
    exam=ckad
    retries=2
    allowed=true
  file2.properties: |
    practice=must
    modetype=openbook
    difficulty=fairlyEasy
kind: ConfigMap
metadata:
  creationTimestamp: "2024-05-30T03:10:57Z"
  name: ckad04-config-multi-env-files-aecs
  namespace: default
  resourceVersion: "8547"
  uid: eb3312d2-0169-4e2a-a941-3eaf9b963db9

I ran:
k create cm ckad04-config-multi-env-files-aecs --from-file=/root/ckad04-multi-cm/file1.properties --from-file=/root/ckad04-multi-cm/file2.properties

The question didn't mention using an env file.

I think the CKAD grader needs to be rechecked.

Key details I need so I can help find these problems: which UME CKAD exam number (1 to 5) and which problem number within the exam. If I have these, I can look up the problems and see what's going on.

This is in CKAD-mock-exam-1
and the questions always shuffle, but you can go through my questions above; I provided all the details there.

We just updated the UME exams, and believe it or not, they don't shuffle any more. If a question is Q5 in exam #3, it's always that now. So the question number and exam number are very useful, letting us find it quickly. I also need the exam number, since it allows me to double-check your work. Sometimes there are bugs in the questions, but not very many of them; more typically, you did something wrong :slight_smile:

At any rate, your “Q1” is Q5/21.

I'm not sure what the issue is with the grader, although you did add the app: labels, which should be allowed but might confuse the grader. The replica numbers are correct, and the right labels match the right selectors. The service has the right name and type, and the right selector field.

Your “Q2” is Q15/21

Here you have an error; you should have used --from-env-file rather than --from-file, which is something else. That said, the wording of the question is not clear; I'll write up a ticket on the wording.
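
The difference is easy to see with a dry run (a sketch reusing the same files; the demo ConfigMap names are just for illustration):

# --from-env-file parses each KEY=VALUE line into an individual data entry:
kubectl create configmap demo-env --from-env-file=/root/ckad04-multi-cm/file1.properties --dry-run=client -o yaml
# data:
#   allowed: "true"
#   exam: ckad
#   retries: "2"

# --from-file stores the whole file verbatim under a key named after the file:
kubectl create configmap demo-file --from-file=/root/ckad04-multi-cm/file1.properties --dry-run=client -o yaml
# data:
#   file1.properties: |
#     exam=ckad
#     retries=2
#     allowed=true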

Could you please report on both issues and update me on the progress of resolving them?
Can you please add these points as well:

In both questions, I was able to achieve the end task, so it doesn't make sense to mark them incorrect. The questions didn't specify that a particular label was required, and the CronJob question was marked wrong because I used command and args, which shouldn't make it incorrect.

hey @rob_kodekloud,
Any update here? Are the test cases sorted?

It can take some time for our lab staff to do this, is all.

Hey @rob_kodekloud,
I got another issue

Lab 8, Question 2:
For this question, please set the context to cluster1 by running:

kubectl config use-context cluster1

In the ckad-job namespace, schedule a job called learning-every-minute that prints this message in the shell every minute: I am practicing for CKAD certification.

In case the container in the pod fails for any reason, it should be restarted automatically.

Use the busybox:1.28 image for the cronjob!

Answer by KodeKloud
Create a YAML file with the content as below:

apiVersion: batch/v1
kind: CronJob
metadata:
  namespace: ckad-job
  name: learning-every-minute
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: learning-every-minute
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - echo I am practicing for CKAD certification
          restartPolicy: OnFailure

Then use kubectl apply -f file_name.yaml to create the required object.
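
To confirm the schedule fires, you can watch for the spawned jobs and check one pod's log (a sketch; job and pod names are generated by the controller):

kubectl get jobs -n ckad-job --watch
kubectl logs -n ckad-job <job-pod-name>
# expected log line: I am practicing for CKAD certification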

Details

Is the cronjob learning-every-minute created?

Is the container image busybox:1.28?

Does cronjob run required command?

Does cronjob run every minute?

My answer:

student-node ~ ➜  k get cj  -n ckad-job
NAME                    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
learning-every-minute   */1 * * * *   False     0        46s             90m

student-node ~ ➜  k get cj  -n ckad-job -o yaml
apiVersion: v1
items:
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    creationTimestamp: "2024-06-07T02:39:43Z"
    generation: 3
    name: learning-every-minute
    namespace: ckad-job
    resourceVersion: "10071"
    uid: ca179f7c-3f28-43fb-b9ce-a1c76e2dc3d2
  spec:
    concurrencyPolicy: Allow
    failedJobsHistoryLimit: 1
    jobTemplate:
      metadata:
        creationTimestamp: null
        name: learning-every-minute
      spec:
        template:
          metadata:
            creationTimestamp: null
          spec:
            containers:
            - command:
              - /bin/sh
              - -c
              - echo I am practicing for CKAD certification
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              name: learning-every-minute
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
            dnsPolicy: ClusterFirst
            restartPolicy: OnFailure
            schedulerName: default-scheduler
            securityContext: {}
            terminationGracePeriodSeconds: 30
    schedule: '*/1 * * * *'
    successfulJobsHistoryLimit: 3
    suspend: false
  status:
    lastScheduleTime: "2024-06-07T04:10:00Z"
    lastSuccessfulTime: "2024-06-07T04:10:03Z"

Only the schedule expression is different (*/1 * * * * instead of * * * * *), but in cron syntax those mean the same thing: run every minute.

Another issue, same lab, Question 20:
Task

SECTION: APPLICATION OBSERVABILITY AND MAINTENANCE

Identify the kube api-resources that use api_version=authorization.k8s.io/v1 using kubectl command line interface and store them in /root/api-version.txt on student-node.

Solution

Use the following command to get details:

kubectl api-resources --sort-by=kind | grep -i authorization.k8s.io/v1  > /root/api-version.txt

Details

apiVersion: authorization.k8s.io/v1

Resources identified

student-node ~ ➜  k api-resources | grep authorization.k8s.io/v1 > api-version.txt

student-node ~ ➜  cat api-version.txt
localsubjectaccessreviews                      authorization.k8s.io/v1           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1           false        SubjectAccessReview
clusterrolebindings                            rbac.authorization.k8s.io/v1      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1      true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1      true         Role
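
Side note: a plain grep for authorization.k8s.io/v1 also matches the rbac.authorization.k8s.io/v1 rows above, in the official solution as well as mine. If the grader expects only the four authorization.k8s.io/v1 resources, a stricter filter would be needed, something like:

kubectl api-resources --sort-by=kind | grep -E '[[:space:]]authorization\.k8s\.io/v1[[:space:]]' > /root/api-version.txt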

Another issue, maybe the last for the day.
Question 10, same lab:
For this question, please set the context to cluster1 by running:

kubectl config use-context cluster1

We have deployed two applications called alpha-color-app and beta-color-app in the default namespace using the kodekloud/webapp-color:v1 and kodekloud/webapp-color:v2 images, respectively.

We have done all the tests and do not want the alpha-color-app deployment to receive traffic from the frontend-service service anymore. So, route all the traffic to the other existing deployment.

Change the service specification to route traffic to the beta-color-app deployment.

You can test the application from the terminal by running the curl command with the following syntax:

curl http://cluster1-controlplane:NODE-PORT
<!doctype html>
<title>Hello from Flask</title>
...
  <h2>
    Application Version: v2
  </h2>

As shown above, we will get the Application Version: v2 in the output.
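
For reference, had the deployments existed, the change would be to point the service selector at the beta pods (a sketch; the actual label keys and values depend on the deployments):

kubectl get deploy beta-color-app -o jsonpath='{.spec.template.metadata.labels}'
kubectl edit service frontend-service
# ...then set spec.selector to the pod labels printed by the first command.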

This is my answer:

student-node ~ ➜  kubectl config use-context cluster1
Switched to context "cluster1".

student-node ~ ➜  k get  pod  -n default 
NAME                   READY   STATUS    RESTARTS   AGE
grape-pod-ckad06-str   2/2     Running   0          103m
ckad16-memory-aecs     1/1     Running   0          34m

student-node ~ ➜  k get pods
NAME                   READY   STATUS    RESTARTS   AGE
grape-pod-ckad06-str   2/2     Running   0          103m
ckad16-memory-aecs     1/1     Running   0          34m

student-node ~ ➜  k get pods | grep alpha-color-app

student-node ~ ✖ k get pods -A | grep alpha-color-app

If you notice, the resources are not deployed. This is not the first time; I have faced the same thing multiple times.

There are only 5 labs. Which did you mean?

There are 8 labs


Also, I noticed the questions always shuffle (they should not be shuffled).

@mmumshad @mumshadgmail @Alistair_KodeKloud
Could you please take a look at those?

Thanks

Looks like they added 3; there were 5 when the new release initially occurred last week.

Can we fix those issues? I can see we still have them.
@rob_kodekloud @mmumshad @Alistair_KodeKloud

I found the CKA lab experience to be excellent, with consistent question presentation (no question shuffling) and a robust evaluation system.

I recently encountered an issue with a lab where, after completing the tasks and checking my answers, some questions showed as incomplete despite my having created the necessary resources. Upon reviewing my command history and confirming the existence of the manifest files, it became apparent that some resources had inexplicably disappeared when I checked the answers.

This is really BAD :neutral_face: