Hey community,
I’ve been working through multiple CKAD mock exams on KodeKloud, but I’ve noticed that some questions are being marked incorrect without any clear reason. I’d appreciate some help with the following question, which seems to be problematic:
Question 1:
For this question, please set the context to cluster3
by running:
kubectl config use-context cluster3
In this task, we have to create two identical environments running different versions of the application. The team has decided to use the blue/green deployment method to deploy a total of 10 application pods, which mitigates common risks such as downtime and provides rollback capability.
We also have to route traffic so that 30% of the traffic is sent to the green-apd environment and the rest is sent to the blue-apd environment. All the development processes will happen on cluster3 because it has enough resources for scalability and utility consumption.
Specification details for creating the blue-apd deployment are listed below:
- The name of the deployment is blue-apd.
- Use the label type-one: blue.
- Use the image kodekloud/webapp-color:v1.
- Add the labels type-one: blue and version: v1 to the pod.
Specification details for creating the green-apd deployment are listed below:
- The name of the deployment is green-apd.
- Use the label type-two: green.
- Use the image kodekloud/webapp-color:v2.
- Add the labels type-two: green and version: v1 to the pod.
We have to create a service called route-apd-svc for these deployments. Details are here:
- The name of the service is route-apd-svc.
- Use the correct service type to access the application from outside the cluster; the application should listen on port 8080.
- Use the selector label version: v1.
NOTE: We do not need to increase replicas for the deployments, and all the resources should be created in the default namespace.
You can check the status of the application from the terminal by running the curl command with the following syntax:
curl http://cluster3-controlplane:NODE-PORT
You can SSH into cluster3 using the ssh cluster3-controlplane command.
Answer by kodekloud:
Run the following command to change the context:
kubectl config use-context cluster3
In this task, we will use the kubectl command. Here are the steps:
- Use the kubectl create command to generate a deployment manifest file as follows:
kubectl create deployment blue-apd --image=kodekloud/webapp-color:v1 --dry-run=client -o yaml > <FILE-NAME-1>.yaml
Do the same for the other deployment and service.
kubectl create deployment green-apd --image=kodekloud/webapp-color:v2 --dry-run=client -o yaml > <FILE-NAME-2>.yaml
kubectl create service nodeport route-apd-svc --tcp=8080:8080 --dry-run=client -o yaml > <FILE-NAME-3>.yaml
- Open each file with any text editor such as vi or nano and make the changes given in the specifications. The blue-apd deployment should look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type-one: blue
  name: blue-apd
spec:
  replicas: 7
  selector:
    matchLabels:
      type-one: blue
      version: v1
  template:
    metadata:
      labels:
        version: v1
        type-one: blue
    spec:
      containers:
      - image: kodekloud/webapp-color:v1
        name: blue-apd
We will deploy a total of 10 application pods. According to the task description, we have to route 70% of the traffic to the blue-apd deployment and 30% to the green-apd deployment.
Since the service distributes traffic equally across all the pods it selects, we set the replica count of the blue-apd deployment to 7 so that the service sends ~70% of the traffic to its pods.
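If you prefer to generate the manifests with a single replica and adjust the counts afterwards, the same 7/3 split can be applied with kubectl scale. A minimal sketch, assuming both deployments already exist in the default namespace:

# Scale the two deployments to reach 10 pods in a 70/30 ratio.
kubectl scale deployment blue-apd --replicas=7
kubectl scale deployment green-apd --replicas=3
# 7 blue pods + 3 green pods = 10 pods total; since the service selects all of
# them via the shared version=v1 label, roughly 70% of requests reach blue-apd.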
The green-apd deployment should look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type-two: green
  name: green-apd
spec:
  replicas: 3
  selector:
    matchLabels:
      type-two: green
      version: v1
  template:
    metadata:
      labels:
        type-two: green
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v2
        name: green-apd
The route-apd-svc service should look like this:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: route-apd-svc
  name: route-apd-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    version: v1
- Now, create the deployments and the service by using the kubectl create -f command:
kubectl create -f <FILE-NAME-1>.yaml -f <FILE-NAME-2>.yaml -f <FILE-NAME-3>.yaml
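Before submitting, it can help to confirm that the service actually selects all ten pods. A quick verification sketch, assuming everything was created in the default namespace:

kubectl get deploy blue-apd green-apd                # expect 7/7 and 3/3 ready
kubectl get pods -l version=v1 --no-headers | wc -l  # expect 10 pods matching the selector
kubectl get endpoints route-apd-svc                  # expect 10 endpoints on port 8080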
Details:
- Is the blue deployment configured correctly?
- Is the green deployment configured correctly?
- Is the service configured correctly?
And this is my solution:
student-node ~ ➜ k get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
blue-apd    7/7     7            7           72m
green-apd   3/3     3            3           70m
student-node ~ ➜ k get deploy blue-apd -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"blue-apd","type-one":"blue"},"name":"blue-apd","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"blue-apd","type-one":"blue","version":"v1"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"blue-apd","type-one":"blue","version":"v1"}},"spec":{"containers":[{"image":"kodekloud/webapp-color:v1","name":"webapp-color","resources":{}}]}}},"status":{}}
  creationTimestamp: "2024-05-30T02:27:02Z"
  generation: 2
  labels:
    app: blue-apd
    type-one: blue
  name: blue-apd
  namespace: default
  resourceVersion: "18689"
  uid: f727f3a8-260c-4ed2-8d4e-371f6dd6f445
spec:
  progressDeadlineSeconds: 600
  replicas: 7
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: blue-apd
      type-one: blue
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: blue-apd
        type-one: blue
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v1
        imagePullPolicy: IfNotPresent
        name: webapp-color
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 7
  conditions:
  - lastTransitionTime: "2024-05-30T02:27:02Z"
    lastUpdateTime: "2024-05-30T02:27:08Z"
    message: ReplicaSet "blue-apd-7f65c5fd79" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-05-30T02:33:22Z"
    lastUpdateTime: "2024-05-30T02:33:22Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 7
  replicas: 7
  updatedReplicas: 7
student-node ~ ➜ k get deploy green-apd -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"green-apd","type-two":"green"},"name":"green-apd","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"green-apd","type-two":"green","version":"v1"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"green-apd","type-two":"green","version":"v1"}},"spec":{"containers":[{"image":"kodekloud/webapp-color:v2","name":"webapp-color","resources":{}}]}}},"status":{}}
  creationTimestamp: "2024-05-30T02:28:47Z"
  generation: 2
  labels:
    app: green-apd
    type-two: green
  name: green-apd
  namespace: default
  resourceVersion: "18578"
  uid: 1ea6caf5-b05d-41e8-9cc9-ebe9c05cff76
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: green-apd
      type-two: green
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: green-apd
        type-two: green
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v2
        imagePullPolicy: IfNotPresent
        name: webapp-color
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-05-30T02:28:47Z"
    lastUpdateTime: "2024-05-30T02:28:50Z"
    message: ReplicaSet "green-apd-d786b9498" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-05-30T02:33:07Z"
    lastUpdateTime: "2024-05-30T02:33:07Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
student-node ~ ➜ k get deploy green-apd -o yaml^C
student-node ~ ✖ k get svc route-apd-svc -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-05-30T02:30:05Z"
  labels:
    app: route-apd-svc
  name: route-apd-svc
  namespace: default
  resourceVersion: "18358"
  uid: 6d5a62c8-1231-4195-9e3e-626aa39a6e93
spec:
  clusterIP: 10.104.149.224
  clusterIPs:
  - 10.104.149.224
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: "8080"
    nodePort: 32744
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    version: v1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
student-node ~ ➜ k describe svc route-apd-svc
Name: route-apd-svc
Namespace: default
Labels: app=route-apd-svc
Annotations: <none>
Selector: version=v1
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.104.149.224
IPs: 10.104.149.224
Port: 8080 8080/TCP
TargetPort: 8080/TCP
NodePort: 8080 32744/TCP
Endpoints: 10.244.192.1:8080,10.244.192.10:8080,10.244.192.2:8080 + 7 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
student-node ~ ➜ curl http://cluster3-controlplane:32744
<!doctype html>
<title>Hello from Flask</title>
<body style="background: #16a085;"></body>
<div style="color: #e4e4e4;
text-align: center;
height: 90px;
vertical-align: middle;">
<h1>Hello from green-apd-d786b9498-ppd6m!</h1>
<h2>
Application Version: v2
</h2>
</div>
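To check the traffic split end to end, you can sample the service repeatedly and count which environment responds. A rough sketch, using the node port 32744 from the output above:

# Send 100 requests and count how many are answered by each deployment.
for i in $(seq 1 100); do
  curl -s http://cluster3-controlplane:32744 | grep -o 'blue-apd\|green-apd'
done | sort | uniq -c
# Expect roughly 70 hits for blue-apd and 30 for green-apd.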
There is nothing wrong here, but the grader marked it as wrong. Can I know the reason?
Question 2:
For this question, please set the context to cluster1
by running:
kubectl config use-context cluster1
Create a ConfigMap named ckad04-config-multi-env-files-aecs
in the default
namespace from the files provided at /root/ckad04-multi-cm
directory.
Solution by kodekloud:
student-node ~ ➜ kubectl config use-context cluster1
Switched to context "cluster1".
student-node ~ ➜ kubectl create configmap ckad04-config-multi-env-files-aecs \
--from-env-file=/root/ckad04-multi-cm/file1.properties \
--from-env-file=/root/ckad04-multi-cm/file2.properties
configmap/ckad04-config-multi-env-files-aecs created
student-node ~ ➜ k get cm ckad04-config-multi-env-files-aecs -o yaml
apiVersion: v1
data:
  allowed: "true"
  difficulty: fairlyEasy
  exam: ckad
  modetype: openbook
  practice: must
  retries: "2"
kind: ConfigMap
metadata:
  name: ckad04-config-multi-env-files-aecs
  namespace: default
Details:
- Is the ConfigMap created with the proper configuration?
My answer:
k get cm ckad04-config-multi-env-files-aecs -o yaml
apiVersion: v1
data:
  file1.properties: |
    exam=ckad
    retries=2
    allowed=true
  file2.properties: |
    practice=must
    modetype=openbook
    difficulty=fairlyEasy
kind: ConfigMap
metadata:
  creationTimestamp: "2024-05-30T03:10:57Z"
  name: ckad04-config-multi-env-files-aecs
  namespace: default
  resourceVersion: "8547"
  uid: eb3312d2-0169-4e2a-a941-3eaf9b963db9
I ran:
k create cm ckad04-config-multi-env-files-aecs --from-file=/root/ckad04-multi-cm/file1.properties --from-file=/root/ckad04-multi-cm/file2.properties
The question didn't mention using an env file.
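For reference, the two flags produce differently shaped ConfigMaps (a minimal comparison, assuming simple key=value files; the ConfigMap names below are just placeholders):

# --from-file stores the whole file content under its filename as a single key:
kubectl create configmap demo-from-file --from-file=/root/ckad04-multi-cm/file1.properties
# data: { "file1.properties": "exam=ckad\nretries=2\nallowed=true\n" }

# --from-env-file promotes each key=value pair to its own top-level key:
kubectl create configmap demo-from-env --from-env-file=/root/ckad04-multi-cm/file1.properties
# data: { "exam": "ckad", "retries": "2", "allowed": "true" }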
I think the CKAD mock exam grader needs to be rechecked.