Troubleshooting a kube-apiserver Docker container that fails to start

I had this happen to me a couple of times:

I change something in /etc/kubernetes/manifests/kube-apiserver.yaml and then check for the API server process. I find that the Docker container exited with code 1. When I check the container's logs, all I see is a single line:

Shutting down, got signal: Terminated

I don’t know where to begin troubleshooting this, since the logs give me nothing to go on. In a lab environment I just recreate the cluster, but I’m worried about this happening in production.

How can I troubleshoot a kube-apiserver that fails to start like this, with no exit reason beyond the code, when it was deployed with kubeadm and therefore runs in a container?

Hello kjenney,

Be careful when you change the kube-apiserver manifest. If you revert your changes, the API server should come back up. To investigate, use docker ps -a to find the exited container, then docker logs api-container to check its logs.
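To expand on that reply, here is a read-only triage sequence for the control-plane node, assuming kubeadm defaults with the Docker runtime (the container label and the kubelet unit name below are the standard ones, but verify them on your node). One detail worth knowing: the lone "Terminated" line is usually the *previous* apiserver instance being stopped after the manifest changed; when the new instance fails to start, the real error (a YAML parse failure, a bad flag) is typically reported by the kubelet, since the kubelet is what launches static pods.

```shell
# Read-only triage on the control-plane node (kubeadm + Docker runtime).
# The label and unit names are the kubeadm defaults; verify on your node.
if command -v docker >/dev/null 2>&1; then
  # 1. List kube-apiserver containers, including exited ones, with status:
  docker ps -a \
    --filter label=io.kubernetes.container.name=kube-apiserver \
    --format '{{.ID}} {{.Status}}'

  # 2. Logs of the most recent attempt. The single "Terminated" line usually
  #    belongs to the old instance; a brand-new instance that dies during
  #    startup may log nothing at all here.
  cid="$(docker ps -aq --filter label=io.kubernetes.container.name=kube-apiserver | head -n1)"
  [ -n "$cid" ] && docker logs --tail 50 "$cid" 2>&1
fi

# 3. Static pods are launched by the kubelet, so manifest parse errors and
#    startup failures land in its journal, not in the container logs:
journalctl -u kubelet --no-pager --since '15 min ago' 2>/dev/null \
  | grep -i apiserver || true
```

In practice step 3 is the one that answers "where do I even start": a bad edit to kube-apiserver.yaml shows up as a kubelet error about failing to parse or run the static pod.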

Thanks,
KodeKloud Support

What if you’re not the one who made the change that broke kube-apiserver? Without version control, how do you reliably revert to a previous version?
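One answer to that follow-up (my suggestion, not something kubeadm provides out of the box) is to put the manifests directory under git, so any edit can be diffed and reverted no matter who made it. The sketch below rehearses the flow on a scratch directory; on a real control-plane node you would point DIR at /etc/kubernetes/manifests, and the kubelet would pick up the reverted file and restart the pod automatically.

```shell
# Demo on a scratch directory; DIR stands in for /etc/kubernetes/manifests.
DIR="$(mktemp -d)"
# Helper: run git in DIR with a fixed identity (placeholder values).
g() { git -C "$DIR" -c user.name=ops -c user.email=ops@example.invalid "$@"; }

printf 'apiVersion: v1\nkind: Pod\n' > "$DIR/kube-apiserver.yaml"
g init -q
g add -A
g commit -qm 'baseline manifest'

# A change lands that breaks the apiserver -- and you did not make it:
printf 'apiVersion: v1\nkind: Pod\nbadField: oops\n' > "$DIR/kube-apiserver.yaml"
g add -A
g commit -qm 'unknown change'

# See exactly what changed, then roll the file back one snapshot:
g diff HEAD~1 HEAD -- kube-apiserver.yaml
g checkout HEAD~1 -- kube-apiserver.yaml
```

The point of the commit-per-change habit is that "who changed what, and when" stops being guesswork: git diff answers the first question and git checkout performs the revert, with no need to recreate the cluster.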