CKAD Mock Exam 1 - Q.5

I can’t figure out why the check on my Deployments fails (the Service check is green-flagged).
I have tried many combinations (setting the container name to blue-apd or green-apd, removing the version: v1 label from the Deployment level while leaving it in the child template/selector…) and nothing changes.
These are my deployments:

blue-apd:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2024-11-18T15:53:38Z"
  generation: 1
  labels:
    app: blue-apd
    type-one: blue
    version: v1
  name: blue-apd
  namespace: default
  resourceVersion: "4317"
  uid: 61916c77-6db7-441c-a2d5-abdaf9a6de25
spec:
  progressDeadlineSeconds: 600
  replicas: 7
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: blue-apd
      type-one: blue
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: blue-apd
        type-one: blue
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v1
        imagePullPolicy: IfNotPresent
        name: webapp-color
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 7
  conditions:
  - lastTransitionTime: "2024-11-18T15:53:55Z"
    lastUpdateTime: "2024-11-18T15:53:55Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-11-18T15:53:38Z"
    lastUpdateTime: "2024-11-18T15:53:56Z"
    message: ReplicaSet "blue-apd-7f65c5fd79" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 7
  replicas: 7
  updatedReplicas: 7

green-apd:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2024-11-18T15:53:38Z"
  generation: 1
  labels:
    app: green-apd
    type-two: green
    version: v1
  name: green-apd
  namespace: default
  resourceVersion: "4326"
  uid: d665e350-ada1-4250-aee2-24bf5cf344f8
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: green-apd
      type-two: green
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: green-apd
        type-two: green
        version: v1
    spec:
      containers:
      - image: kodekloud/webapp-color:v2
        imagePullPolicy: IfNotPresent
        name: webapp-color
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-11-18T15:53:58Z"
    lastUpdateTime: "2024-11-18T15:53:58Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-11-18T15:53:38Z"
    lastUpdateTime: "2024-11-18T15:53:58Z"
    message: ReplicaSet "green-apd-d786b9498" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

Hi @Marco78

It seems there might be a slight misconfiguration with the labels in your Deployments. The labels for the Deployment itself are configured under .metadata.labels: for blue-apd this should only contain type-one: blue, and for green-apd it should only contain type-two: green.

The labels for the Pods are configured in two places within the Deployment. First, under .spec.selector.matchLabels: this field specifies the labels used to identify the Pods managed by the Deployment. Second, under .spec.template.metadata.labels: these must match the labels defined in .spec.selector.matchLabels.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type-one: blue
  name: blue-apd
spec:
  replicas: 7
  selector:
    matchLabels:
      type-one: blue
      version: v1
  template:
    metadata:
      labels:
        version: v1
        type-one: blue
    spec:
      containers:
        - image: kodekloud/webapp-color:v1
          name: blue-apd
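
For reference, the green-apd Deployment follows the same pattern. This is only a sketch based on your original manifest, keeping just type-two: green on the Deployment itself (the container name here is an assumption; use whatever the question asks for):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type-two: green
  name: green-apd
spec:
  replicas: 3
  selector:
    matchLabels:
      type-two: green
      version: v1
  template:
    metadata:
      labels:
        type-two: green
        version: v1
    spec:
      containers:
        - image: kodekloud/webapp-color:v2
          name: green-apd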

Regards.

Ok, now I see: the question was strictly asking to put only the useful labels. But may I ask you why?

It seems like a waste of time to cut out app: green-apd, which is produced by the imperative command’s generated output itself.

Also, to me it’s good practice to keep the labels identical from top to bottom of my YAML for consistency. I’d need a very good reason not to put version in the root Deployment labels as well; one day that strategic label could be very handy, helping me select all Deployments by it.

Well, what do you think?
At the very least, please make this requirement very explicit in the question; otherwise it just seems like a cheap way to make students slip up, since the underlying logic of the green/blue plus canary scenario is enough to work out the question.

One way to think about this is that it enforces a better understanding of the labels applied to the Deployment itself versus those used in the Pod template of a Deployment.

Labels are one of the key mechanisms used for referencing/targeting different objects in Kubernetes: they can be used to organize and to select subsets of objects.

The question specifically asks to label the Deployment and the Pods differently.
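
As an illustration of why the Pod template labels matter on their own (this Service is hypothetical and not part of the question’s checks), other objects such as Services select Pods purely by the Pod labels; the labels on the Deployment object itself play no part in that selection:

apiVersion: v1
kind: Service
metadata:
  name: blue-svc            # hypothetical name, for illustration only
spec:
  selector:
    type-one: blue          # matches the Pod template labels, not the Deployment's metadata labels
    version: v1
  ports:
  - port: 8080              # webapp-color listens on 8080 by default; adjust if your setup differs
    targetPort: 8080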

Regards.