Hello,
I am probably missing something, but I have no clue!
The image is available, but the pod fails to pull it.
I'd be very glad if someone could help me.
I did exactly what is being described here https://github.com/kodekloudhub/certified-kubernetes-security-specialist-cks-course/blob/main/docs/09-cks-challenges/02-challenge-2.md
I will retry it again
What did you set imagePullPolicy to in the pod spec? If it is Always, then it will attempt to pull the image from Docker Hub, where it does not exist.
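For reference, here is a minimal container-spec fragment showing where the field goes and what each value means (the image and container name are simply the ones used in this challenge):

containers:
- name: webapp-color
  image: kodekloud/webapp-color:stable
  # Never        -> only use an image already present on the node
  # IfNotPresent -> pull only if the image is not already on the node
  # Always       -> always contact the registry; fails here because the
  #                 locally built tag does not exist on Docker Hub
  imagePullPolicy: IfNotPresent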
Hello @Alistair_KodeKloud, thanks for your help.
I tried the three policy options “Never, IfNotPresent and Always”, but none of them works.
Thanks!
root@controlplane ~ ➜ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
kodekloud/webapp-color stable 75a427ac616b 33 seconds ago 51.8MB
busybox latest 66ba00ad3de8 12 months ago 4.87MB
nginx latest 1403e55ab369 12 months ago 142MB
nginx alpine 1e415454686a 12 months ago 40.7MB
k8s.gcr.io/kube-apiserver v1.23.0 e6bf5ddd4098 2 years ago 135MB
k8s.gcr.io/kube-scheduler v1.23.0 56c5af1d00b5 2 years ago 53.5MB
k8s.gcr.io/kube-controller-manager v1.23.0 37c6aeb3663b 2 years ago 125MB
k8s.gcr.io/kube-proxy v1.23.0 e03484a90585 2 years ago 112MB
python 3.6-alpine 3a9e80fa4606 2 years ago 40.7MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 2 years ago 293MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 2 years ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 2 years ago 683kB
weaveworks/weave-npc 2.8.1 7f92d556d4ff 2 years ago 39.3MB
weaveworks/weave-kube 2.8.1 df29c0a4002c 2 years ago 89MB
quay.io/coreos/flannel v0.13.1-rc1 f03a23d55e57 3 years ago 64.6MB
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 3 years ago 52.8MB
kodekloud/fluent-ui-running latest bd30270a8b9a 5 years ago 969MB
kodekloud/webapp-color latest 32a1ce4c22f2 5 years ago 84.8MB
root@controlplane ~ ➜ cat dev-webapp.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dev-webapp
  name: dev-webapp
  namespace: dev
spec:
  nodeName: controlplane
  containers:
  - env:
    - name: APP_COLOR
      value: darkblue
    image: kodekloud/webapp-color:stable
    imagePullPolicy: Never
    name: webapp-color
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      runAsUser: 0
    startupProbe:
      exec:
        command:
        - rm
        - /bin/sh
        - /bin/ash
      initialDelaySeconds: 5
      periodSeconds: 5
root@controlplane ~ ➜ k get po -n dev
NAME READY STATUS RESTARTS AGE
dev-webapp 1/1 Running 0 2m17s
root@controlplane ~ ➜ date
Sat Jan 6 14:14:15 UTC 2024
If you have not specifically told the pod to schedule on controlplane and it gets scheduled to node01, then you will get this error, because the image you have built exists only on controlplane. You could also export the image, SCP it to node01 and import it there; then the pod would schedule on either node.
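If you go the export route, a rough sketch (assuming node01 is reachable over SSH from controlplane and the kubelet uses containerd as its runtime):

# Export the image that was built on controlplane
docker save kodekloud/webapp-color:stable -o webapp-color.tar

# Copy it to the worker node
scp webapp-color.tar node01:/tmp/webapp-color.tar

# Import it into containerd's k8s.io namespace so the kubelet can see it
ssh node01 'ctr -n k8s.io images import /tmp/webapp-color.tar'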
This is CKS - you’re expected to know this sort of thing about clusters without having it explicitly taught.
It’s generally recommended to attempt advanced level certifications when you work with the subject matter on a daily basis.
@Alistair_KodeKloud, thanks once again for your help.
You are definitely right; after comparing your config with mine, I noticed I was missing the node assignment to controlplane.
Indeed, it’s CKS and I should know it, but thanks again for your help!
You’re welcome! Happy learning,
Hi,
@Alistair_KodeKloud what is going on with the CKS Challenges? I am having a really bad experience with them so far. The instructions are wrong; for example, I followed everything you said in this post and I am still getting an error.
controlplane ~ ➜ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
kodekloud/webapp-color stable 8ab896cf4cc8 12 minutes ago 51.9MB
goharbor/harbor-exporter v2.8.4 b8d33e28ec68 17 months ago 97.7MB
goharbor/redis-photon v2.8.4 7b7324d651ca 17 months ago 120MB
goharbor/trivy-adapter-photon v2.8.4 91d8e9f0b21a 17 months ago 464MB
goharbor/notary-server-photon v2.8.4 a46f91560454 17 months ago 113MB
goharbor/notary-signer-photon v2.8.4 da66bd8d944b 17 months ago 110MB
goharbor/harbor-registryctl v2.8.4 805b38ca6bee 17 months ago 141MB
goharbor/registry-photon v2.8.4 756769e94123 17 months ago 79MB
goharbor/nginx-photon v2.8.4 375018db778b 17 months ago 116MB
goharbor/harbor-log v2.8.4 8a2045fb24d2 17 months ago 124MB
goharbor/harbor-jobservice v2.8.4 97808fc10f64 17 months ago 141MB
goharbor/harbor-core v2.8.4 c26fcd0714d8 17 months ago 164MB
goharbor/harbor-portal v2.8.4 4a8b0205c0f9 17 months ago 124MB
goharbor/harbor-db v2.8.4 5b8af16d7420 17 months ago 174MB
goharbor/prepare v2.8.4 bdbf974d86ce 17 months ago 166MB
python 3.6-alpine 3a9e80fa4606 3 years ago 40.7MB
controlplane ~ ➜ cat dev-webapp.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dev-webapp
  name: dev-webapp
  namespace: dev
spec:
  nodeName: controlplane
  containers:
  - env:
    - name: APP_COLOR
      value: darkblue
    image: kodekloud/webapp-color:stable
    imagePullPolicy: Never
    name: webapp-color
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      runAsUser: 0
    startupProbe:
      exec:
        command:
        - rm
        - /bin/sh
        - /bin/ash
      initialDelaySeconds: 5
      periodSeconds: 5
controlplane ~ ➜ k describe pod -n dev dev-webapp
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 3m4s (x12 over 5m14s) kubelet Error: ErrImageNeverPull
Warning ErrImageNeverPull 4s (x26 over 5m14s) kubelet Container image "kodekloud/webapp-color:stable" is not present with pull policy of Never
Almost certainly, you’ve done something wrong; the instructions in the challenge for the Dockerfile say:
Run as non-root (instead, use the correct application user)
Avoid exposing unnecessary ports
Avoid copying the “Dockerfile” and other unnecessary files and directories into the image. Move the required files and directories (app.py, requirements.txt and the templates directory) to a subdirectory called “app” under “webapp” and update the COPY instruction in the “Dockerfile” accordingly.
Once the security issues are fixed, rebuild this image locally with the tag “controlplane:32766/kodekloud/webapp-color:stable” and push it to the private registry using the /root/push-to-registry.sh script.
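For anyone following along, a rough sketch of what the fixed Dockerfile could look like; the base image, user name, and port are assumptions, so adapt them to whatever the challenge’s original Dockerfile actually uses:

# Base image assumed; keep whatever the original Dockerfile pins
FROM python:3.6-alpine

# Run as a dedicated non-root application user (name is illustrative)
RUN adduser -D appuser

WORKDIR /opt/webapp

# Copy only the required files, now moved into the app/ subdirectory,
# instead of copying the whole build context (which included the Dockerfile)
COPY app/requirements.txt .
RUN pip install -r requirements.txt
COPY app/ .

USER appuser

# Expose only the port the application actually listens on (assumed here to be 8080)
EXPOSE 8080

ENTRYPOINT ["python", "app.py"]

Then rebuild from the directory containing the Dockerfile with docker build -t controlplane:32766/kodekloud/webapp-color:stable . and run /root/push-to-registry.sh to push it to the private registry.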
If you did this correctly, then you should see this:
controlplane ~ ➜ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
controlplane:32766/kodekloud/webapp-color stable 205284381cc4 4 minutes ago 51.9MB
goharbor/harbor-exporter v2.8.4 b8d33e28ec68 17 months ago 97.7MB
goharbor/redis-photon v2.8.4 7b7324d651ca 17 months ago 120MB
goharbor/trivy-adapter-photon v2.8.4 91d8e9f0b21a 17 months ago 464MB
goharbor/notary-server-photon v2.8.4 a46f91560454 17 months ago 113MB
goharbor/notary-signer-photon v2.8.4 da66bd8d944b 17 months ago 110MB
goharbor/harbor-registryctl v2.8.4 805b38ca6bee 17 months ago 141MB
goharbor/registry-photon v2.8.4 756769e94123 17 months ago 79MB
goharbor/nginx-photon v2.8.4 375018db778b 17 months ago 116MB
goharbor/harbor-log v2.8.4 8a2045fb24d2 17 months ago 124MB
goharbor/harbor-jobservice v2.8.4 97808fc10f64 17 months ago 141MB
goharbor/harbor-core v2.8.4 c26fcd0714d8 17 months ago 164MB
goharbor/harbor-portal v2.8.4 4a8b0205c0f9 17 months ago 124MB
goharbor/harbor-db v2.8.4 5b8af16d7420 17 months ago 174MB
goharbor/prepare v2.8.4 bdbf974d86ce 17 months ago 166MB
python 3.6-alpine 3a9e80fa4606 3 years ago 40.7MB
Once you’ve done this, fixing dev-webapp and staging-webapp requires you to do a number of things you did not do above. Note I’ve removed imagePullPolicy: Never, since the image must now be pulled from the local private registry (controlplane:32766):
controlplane ~ ➜ cat dev-webapp.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dev-webapp
  name: dev-webapp
  namespace: dev
spec:
  containers:
  - env:
    - name: APP_COLOR
      value: darkblue
    image: controlplane:32766/kodekloud/webapp-color:stable
    #imagePullPolicy: Never
    name: webapp-color
    resources: {}
    securityContext:
      runAsUser: 0
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
    startupProbe:
      exec:
        command:
        - rm
        - /bin/sh
        - /bin/ash
      initialDelaySeconds: 5
      periodSeconds: 5
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-z4lvb
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-z4lvb
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
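Since most of these fields cannot be changed on a running pod, one way to apply the edited manifest is to recreate the pod, for example:

kubectl replace --force -f dev-webapp.yaml
# or
kubectl delete pod -n dev dev-webapp && kubectl apply -f dev-webapp.yaml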
Yes, I did this prior to reaching this post; I followed exactly what Challenge 2 was asking for.
Then I arrived here and saw you were not using the private registry. I tried the same approach and it did not work.
Maybe the requirements changed since this post was created.
The private registry was complaining about a certificate. I will try again and will report back here.
@rob_kodekloud now it’s working.
As I said, I was receiving a certificate error in the pod events.
This is really frustrating for anyone doing the Challenges, since it is sometimes hard to tell whether we are doing something wrong or whether the Challenge itself is broken. In my opinion, KodeKloud should review all of the CKS material; I am a paying student and can confirm that several labs have gone unreviewed.
If anyone is still having trouble getting the image to run:
I experienced the same error, even though I ran the Pod on the controlplane node and had imagePullPolicy: Never set in the Pod manifest.
I noticed that when I ran docker images the image was showing up, but when I ran crictl images it was not (Docker and containerd keep separate image stores, so an image built only with Docker is not visible to the kubelet). So I ran crictl pull controlplane:32766/kodekloud/webapp-color:stable and redeployed the Pod.
This solved the problem.
I hope this helps!
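For reference, the sequence looks roughly like this (pod name and namespace as used in the challenge):

# Image is in Docker's store but not in containerd's
crictl images | grep webapp-color

# Pull it from the private registry into the store the kubelet actually uses
crictl pull controlplane:32766/kodekloud/webapp-color:stable

# Recreate the pod so the kubelet picks the image up
kubectl delete pod -n dev dev-webapp
kubectl apply -f dev-webapp.yaml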