Declaring a volume in the Dockerfile - behaviour in the Kubernetes world

Hi Experts,

If I declare a volume in the Dockerfile itself and then build the image, I noticed that Docker creates an anonymous volume on the host without me having to specify “-v” in the docker run command. If I stop and start the container, it binds to the same volume, but if I destroy the container and create a new one (same name), a new volume is created.
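For reference, here is roughly what I am doing (the image and container names are just placeholders I picked for this example):

```sh
# Minimal reproduction of the behaviour I am describing
cat > Dockerfile <<'EOF'
FROM alpine:3.19
VOLUME /data
CMD ["sleep", "infinity"]
EOF

docker build -t vol-test .
docker run -d --name vol-test vol-test
docker inspect vol-test --format '{{ json .Mounts }}'   # shows an anonymous volume mounted at /data
docker stop vol-test && docker start vol-test            # the same anonymous volume is reused
docker rm -f vol-test
docker run -d --name vol-test vol-test                   # a brand-new anonymous volume is created
docker volume ls                                          # the previous one is left behind, unreferenced
```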

Bringing the same image into the Kubernetes world, how will it behave? I see that it honors the volume declaration, but I cannot see any volumes created under Storage. Does the volume get created on the node (host) implicitly? We don’t have stop/start of containers in the k8s world, so does that mean it will leave stale volume residue on the host every time a pod is created from the image? Or will it destroy the volume implicitly once the pod is terminated? FYI, I don’t see any volume mounts when I describe the pod or look at the container inside the pod.

Thanks
Aditya

I don’t have a convenient lab to try this on, but what should probably happen is that when kubelet presents the image to the container runtime, the container runtime (not always Docker, and never Docker since Kubernetes 1.24) should do what it would do outside of Kubernetes: create the volume locally on the node, and probably clean it up on termination.
Kubernetes itself will know nothing of any volume created this way, because Kubernetes didn’t create it.
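If you want to confirm that from the Kubernetes side, nothing related to the image’s VOLUME statement should show up in the API. Purely illustrative commands below, with `myapp` standing in for your pod name:

```sh
# Nothing Kubernetes-managed exists for the implicit volume:
kubectl get pv,pvc                                      # no objects created by the VOLUME statement
kubectl get pod myapp -o jsonpath='{.spec.volumes}'     # only volumes declared in the pod spec
                                                        # (plus the default service-account token volume)
# Anything the runtime created for VOLUME lives only on the node's filesystem,
# under the runtime's own state directory, and Kubernetes never tracks it.
```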

Best not to create images with a VOLUME statement in them if they are intended for Kubernetes. Use bind mounts to simulate Kubernetes volumes if you are running the image locally in Docker for testing.
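A rough sketch of both halves of that, with the image name, paths, and the choice of an emptyDir all invented for illustration:

```sh
# Local Docker testing: bind-mount a host directory instead of relying on VOLUME
docker run -d --name myapp -v "$PWD/data:/data" myapp:latest

# In Kubernetes, declare the volume explicitly in the pod spec instead,
# e.g. an emptyDir (or a PVC) mounted at the same path, so Kubernetes owns its lifecycle:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
EOF
```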