Hi,
I’m following the CKS course and I’ve got to the ABAC authorization lab… I’ve set the following within kube-apiserver.yaml:
- --authorization-policy-file=/etc/kubernetes/abac/abac-policy.jsonl
- --authorization-mode=Node,RBAC,ABAC
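For reference, the policy file holds one JSON `Policy` object per line (JSON Lines format); mine looks something like this, where the user name is just an example, not the one from the lab:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "developer", "namespace": "*", "resource": "pods", "readonly": true}}
```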
As you can see, the file exists, and it is in the location suggested by the lab:
ls -l /etc/kubernetes/abac/abac-policy.jsonl
-rw-r--r-- 1 root root 205 Nov 2 20:11 /etc/kubernetes/abac/abac-policy.jsonl
kube-apiserver keeps restarting, complaining that the file is not found:
crictl logs 1b85e0f1f2a51
I1102 20:13:53.478035 1 options.go:228] external host was not specified, using 192.28.245.6
I1102 20:13:53.479634 1 server.go:142] Version: v1.31.0
I1102 20:13:53.479665 1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E1102 20:13:54.132339 1 run.go:72] "command failed" err="invalid authorization config: open /etc/kubernetes/abac/abac-policy.jsonl: no such file or directory"
I1102 20:13:54.132346 1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
What am I missing?
Did you mount the file /etc/kubernetes/abac/abac-policy.jsonl into the container? There would need to be volumes and volumeMounts clauses to do that.
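A sketch of what that could look like in the static pod manifest, modeled on the existing hostPath mount for `/etc/kubernetes/pki` (the volume name here is illustrative):

```yaml
# In /etc/kubernetes/manifests/kube-apiserver.yaml,
# under the kube-apiserver container:
    volumeMounts:
    - name: abac-policy            # illustrative name
      mountPath: /etc/kubernetes/abac
      readOnly: true
# and at the pod spec level:
  volumes:
  - name: abac-policy
    hostPath:
      path: /etc/kubernetes/abac
      type: DirectoryOrCreate
```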
You’re definitely right: I need to mount the folder with the jsonl file into the container, just as is done for the folder with the pki certificates. I’ll redo the lab today. Another lesson learned: when you’re too tired, take a break and come back later.
EDIT: just added the necessary host mount for /etc/kubernetes/abac and everything worked as expected.
I have another small question about the same lab… at the end we modify the ABAC policy by changing readonly from true to false. If we change a value in the kube-apiserver.yaml file, the kube-apiserver-controlplane pod restarts and reloads the ABAC policy. Why doesn’t it work if we simply delete the kube-apiserver-controlplane pod with kubectl delete pod -n kube-system -l component=kube-apiserver? Shouldn’t the kube-apiserver restart and re-read the file after a plain delete as well?
I’m not sure why deleting the pod would not serve to restart it, TBH. But I’m sure that restarting kubelet will, so that’s what I recommend.
@gianni.costanzi
Remember your CKA: you cannot delete static pods with kubectl delete. The only ways to recycle a static pod are either
- change something in the manifest, or
- move the manifest file out of the manifest directory until kubelet deletes the pod, then move the manifest file back in again.
Restarting kubelet will only serve to hurry along one of the above 2 options.
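The second option can be sketched like this, assuming the default kubelet manifest directory (run on the control plane node, not through kubectl):

```shell
# Move the manifest out of the static pod directory; kubelet notices and stops the pod
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/

# Wait until the apiserver container is gone
crictl ps | grep kube-apiserver

# Move it back; kubelet recreates the pod, which re-reads the ABAC policy file
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
```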
Thank you. I was convinced that kubectl delete would trigger a simple restart of the static pod, and that moving the manifest file away was only necessary to permanently stop (not restart) the pod.