CreateContainerConfigError after K8s upgrade to version 1.31

Hi! After upgrading the first control-plane server to K8s version 1.31 and rebooting it, all of its DaemonSet pods are stuck in the “CreateContainerConfigError” state instead of “Running”. The error message shown on them, as well as in the kubelet service logs, is: “Error: services have not yet been read at least once, cannot construct envvars”. I’ve never run into this kind of trouble in previous K8s upgrades, going back to 1.27. Does anyone know the cause and the fix for this? Thank you!

I just tested downgrading the kubelet to version 1.30 and it worked as a workaround, but I still can’t understand why upgrading it to version 1.31 causes these errors.
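A possible explanation (my reading, not stated in the thread): with a 1.31 kubelet talking to a 1.30 control plane, the kubelet is newer than the API server, which the Kubernetes version-skew policy forbids, so downgrading the kubelet restores an allowed skew. A tiny POSIX-sh sketch of that check — the helper names are mine, not a real tool, and it assumes a 1.x cluster (only the minor version is compared):

```shell
#!/bin/sh
# Hypothetical helper: succeeds only if the kubelet is NOT newer than the
# API server, per the Kubernetes version-skew policy (kubelet may be up to
# three minor versions older, but never newer).
minor() { v=${1#v}; echo "$v" | cut -d. -f2; }

skew_ok() {
  # $1 = API server version, $2 = kubelet version (e.g. v1.31.3)
  [ "$(minor "$2")" -le "$(minor "$1")" ]
}

# The combination described in the thread: 1.30 control plane, 1.31 kubelet
if skew_ok v1.30.5 v1.31.0; then echo "skew ok"; else echo "kubelet too new"; fi
# prints: kubelet too new
```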

Hi @mca_75

Is this with some KodeKloud lab or playground?

It seems the containers are unable to get the configuration they need to start; some ConfigMaps or Secrets may be missing. You can check the container logs for more details.
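The checks above could look like this (a sketch; the pod name and namespace are placeholders to fill in):

```shell
# Events on a failing pod usually carry the CreateContainerConfigError reason
kubectl describe pod <pod-name> -n kube-system

# Recent events across the namespace, newest last
kubectl get events -n kube-system --sort-by=.lastTimestamp

# The kubelet's own log, where the same "cannot construct envvars" error appears
journalctl -u kubelet --since "10 minutes ago"
```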

Hi, @Santosh_KodeKloud! It occurs in a K8s cluster with multiple control planes that I have at my workplace. It doesn’t seem that any ConfigMap or Secret is missing and, as I said before, when I downgrade the kubelet from version 1.31 to 1.30 the problem vanishes; therefore, the problem arises with the kubelet upgrade from 1.30 to 1.31. I also tried the same upgrade in a KodeKloud lab and the problem does not arise there. I’ll take a look at the logs as you suggested. Thank you very much!

I found a web page that seems to mention the same problem I’m facing with the upgrade to version 1.31: The ungleich kubernetes infrastructure - Open Infrastructure - ungleich redmine. Below is the exact excerpt:

Upgrading to 1.31
Cluster needs to be updated FIRST before kubelet/the OS
Otherwise you run into errors in the pod like this:

Warning Failed 11s (x3 over 12s) kubelet Error: services have not yet been read at least once, cannot construct envvars
And the resulting pod state is:

Init:CreateContainerConfigError
Fix:

find an old 1.30 kubelet package, downgrade kubelet, upgrade the control plane, upgrade kubelet again
wget https://mirror.ungleich.ch/mirror/packages/alpine/v3.20/community/x86_64/kubelet-1.30.0-r3.apk
wget https://mirror.ungleich.ch/mirror/packages/alpine/v3.20/community/x86_64/kubelet-openrc-1.30.0-r3.apk
apk add ./kubelet-1.30.0-r3.apk ./kubelet-openrc-1.30.0-r3.apk
/etc/init.d/kubelet restart
Then upgrade:

/usr/local/bin/kubeadm-v1.31.3 upgrade apply -y v1.31.3
Then re-upgrade the kubelet:

apk upgrade -a

As said in The ungleich kubernetes infrastructure - Open Infrastructure - ungleich redmine, to make the problem go away it is important to upgrade kubeadm and run “kubeadm upgrade” on ALL control-plane nodes first, before starting to upgrade their kubectl and kubelet. I just tested it and it worked.
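For a kubeadm-managed cluster on a deb-based distro, the working order might look like the sketch below (the package-manager commands and the exact 1.31 patch version are assumptions on my part; the excerpt above uses Alpine’s apk instead):

```shell
# 1) On the FIRST control-plane node: upgrade kubeadm, then the control plane
#    (if the packages are held, apt-mark unhold/hold around each install)
apt-get update && apt-get install -y kubeadm='1.31.3-*'
kubeadm upgrade plan
kubeadm upgrade apply v1.31.3

# 2) On EACH remaining control-plane node:
apt-get install -y kubeadm='1.31.3-*'
kubeadm upgrade node

# 3) Only after ALL control planes run 1.31: upgrade kubelet/kubectl per node
apt-get install -y kubelet='1.31.3-*' kubectl='1.31.3-*'
systemctl daemon-reload && systemctl restart kubelet
```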

@Santosh_KodeKloud