KUBERNETES CHALLENGE 4: problem with ConfigMap

I failed to complete only the ConfigMap volume steps in Kubernetes Challenges - KodeKloud.

Failed steps below:

  • Volume Mount - name: 'conf', mountPath: '/conf', defaultMode = '0755' (ConfigMap Mount)
  • volumes - name: 'conf', Type: 'ConfigMap', ConfigMap Name: 'redis-cluster-configmap'

This looks correct to me. It may be a bug in the challenge, or I may be missing something that I cannot see:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  selector:
    matchLabels:
      app: redis-cluster # has to match .spec.template.metadata.labels
  serviceName: "redis-cluster"
  replicas: 6 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: redis-cluster # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data
      - name: conf
        configMap:
          name: redis-cluster-configmap
          defaultMode: 0755
      containers:
      - name: redis
        image: redis:5.0.1-alpine
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
        - name: conf
          mountPath: /conf

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
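
One note on the manifest above: in YAML, defaultMode: 0755 is an octal literal, equivalent to decimal 493, and Kubernetes accepts either form. A minimal sketch of just that stanza with the decimal value:

      volumes:
      - name: conf
        configMap:
          name: redis-cluster-configmap
          defaultMode: 493   # same permissions as octal 0755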

Thanks for highlighting this issue. We are working on it. I will update you once it’s fixed.

Regards,
KodeKloud Support


We resolved it and updated the context. Please give it another try.

Regards,

I am still getting the exact same error.

Why are you creating a persistent volume claim? We haven't mentioned it in the question.

  1. Is this relevant to the problem I am reporting?
  2. You are asking for a volume mount with a volumeClaim. How can I do that without claiming it?

Am I misunderstanding the question? If so, why does the validation pass for the data mount?

volumeClaim is shorthand for volumeClaimTemplates.

Basically, volumeClaimTemplates creates a persistent volume claim for each pod, so we don't need to create them separately.
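
For example, with replicas: 6 and a claim template named data, the controller creates one PVC per pod, named <template>-<statefulset>-<ordinal>. A quick way to check (the output below is illustrative; capacity and status depend on the cluster):

kubectl get pvc
# NAME                   STATUS   CAPACITY   ACCESS MODES
# data-redis-cluster-0   Bound    1Gi        RWO
# ...
# data-redis-cluster-5   Bound    1Gi        RWO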

When I remove the persistentVolumeClaim from the pod spec, it works.
But the challenge is buggy: without changing anything in the ConfigMap mount, it is now accepting the solution.
The ConfigMap details should be validated independently of a mistake somewhere else.
This is misleading.
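
For reference, a minimal sketch of the pod-spec volumes that passed for me: keep only conf under volumes and let volumeClaimTemplates supply data:

      volumes:
      - name: conf
        configMap:
          name: redis-cluster-configmap
          defaultMode: 0755
      # no data entry here; the volumeClaimTemplates section provides it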

Thanks for solving my issue anyhow. :clap:


Hi,
I have a problem with the ConfigMap; I am attaching the events:

Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  12m                     default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         12m                     default-scheduler  Successfully assigned default/redis-cluster-0 to node01
  Warning  FailedMount       7m42s                   kubelet            Unable to attach or mount volumes: unmounted volumes=[conf], unattached volumes=[data kube-api-access-94x6l conf]: timed out waiting for the condition
  Warning  FailedMount       3m11s (x2 over 9m59s)   kubelet            Unable to attach or mount volumes: unmounted volumes=[conf], unattached volumes=[conf data kube-api-access-94x6l]: timed out waiting for the condition
  Warning  FailedMount       106s (x13 over 12m)     kubelet            MountVolume.SetUp failed for volume "conf" : configmap "redis-cluster-configmap" not found
  Warning  FailedMount       56s (x2 over 5m27s)     kubelet            Unable to attach or mount volumes: unmounted volumes=[conf], unattached volumes=[kube-api-access-94x6l conf data]: timed out waiting for the condition

The ConfigMap does not exist.
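
That MountVolume.SetUp event means the ConfigMap is missing from the pod's namespace. A quick way to confirm, plus a hypothetical creation command (the file names are taken from the container command in the manifest; the actual contents would come from the lab environment):

kubectl get configmap redis-cluster-configmap
# if it is missing, something like this would create it:
kubectl create configmap redis-cluster-configmap \
  --from-file=update-node.sh --from-file=redis.conf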


Hi @robertomagallanes221,
Thanks for highlighting this. I have forwarded it to the team. They will check and fix it ASAP.

Regards,

Hi @robertomagallanes221,
This issue is fixed. Please try it and let us know if you encounter any issues.

Regards,

Hi @Tej-Singh-Rana, it works.

Thanks