Kube-apiserver not starting after enabling audit logging

Hey folks,

This is the last question I’m trying to solve for my CKS course, and I’m stuck. Hoping someone here can help me out.

Question:

Now enable auditing in this Kubernetes cluster. Create a new policy file, set it to the Metadata level, and have it log only events matching the specifications below:

Namespace: prod

Operations: delete

Resources: secrets

Log Path: /var/log/prod-secrets.log

Audit file location: /etc/kubernetes/prod-audit.yaml

Maximum days to keep the logs: 30

My answer:
Here’s the audit policy file I created:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  namespaces: ["prod"]
  verbs: ["delete"]
  resources:
  - group: ""
    resources: ["secrets"]

I updated my kube-apiserver static pod manifest (/etc/kubernetes/manifests/kube-apiserver.yaml) with these flags:
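
  - --audit-policy-file=/etc/kubernetes/prod-audit.yaml
  - --audit-log-path=/var/log/prod-secrets.log
  - --audit-log-maxage=30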

After making these changes:

  • The kube-apiserver container is not starting.
  • The container doesn't show up in crictl ps -a | grep api, and I can't find any logs under /var/log/containers.
  • I’ve checked the syntax of the audit file and the manifest, but nothing stands out.

I suspect the API server is crashing before the container even starts properly.


Has anyone encountered this before?

  • How can I debug this when I don’t get any logs?

Any help would be appreciated!

Please follow the guidance here for debugging a crashed API server.
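
A few things that usually narrow this down. If the manifest YAML itself is invalid, kubelet never creates the container at all, so crictl and /var/log/containers show nothing; kubelet's own log says why. Roughly (the container ID is a placeholder):

# Check why kubelet can't create the pod from the static manifest
journalctl -u kubelet | grep -i apiserver

# If a container was created but is crash-looping, get its logs
crictl ps -a | grep kube-apiserver
crictl logs <container-id>

# Static pod logs also land here even when /var/log/containers is empty
ls /var/log/pods/kube-system_kube-apiserver-*/

The most common cause of exactly these symptoms is forgetting to mount the audit policy file and the log path into the kube-apiserver pod: the process exits immediately because it can't find the policy file. Assuming the paths from the question (the volume names are arbitrary), the container spec needs something like:

    volumeMounts:
    - mountPath: /etc/kubernetes/prod-audit.yaml
      name: audit
      readOnly: true
    - mountPath: /var/log/prod-secrets.log
      name: audit-log

    volumes:
    - name: audit
      hostPath:
        path: /etc/kubernetes/prod-audit.yaml
        type: File
    - name: audit-log
      hostPath:
        path: /var/log/prod-secrets.log
        type: FileOrCreate

After fixing the manifest, kubelet should restart the pod on its own; if it doesn't, moving the file out of /etc/kubernetes/manifests and back forces a restart.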