Hello Team,
Hope you are all doing well. I ran a rather unusual test case.
Scenario: a pod running vdbench (an IO load generator) with persistent volumes, continuously doing IO on the volumes exposed from the storage array. I then logged into the storage array, unmapped the volume, and deleted it, so the volume no longer exists on the storage.
Results:
- The PV and PVC for the deleted volume still show a status of Bound. The Kubernetes docs describe a Terminating state; shouldn't the status change to Terminating regardless of whether the deletion originated on the storage array or in the storage provisioner? (See the check sketched after this list.)
- The pod is also still Running. Surprisingly, the mount point inside the pod still shows the volume as mounted (df -kh), and the data appears intact and accessible.
- There is no sign of disruption from the Kubernetes perspective; however, the system logs on the worker node where the volume was exposed and the pod was running do capture the mount error messages (attached for reference). The CSI plugin pod running on that worker node also logged the error.
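For anyone who wants to double-check what the control plane reports, here is a minimal sketch of the query I used, assuming the official kubernetes Python client and a kubeconfig with access to the cluster; the PVC name "vdbench-pvc" and namespace "default" are placeholders for my test objects:

# Minimal sketch: confirm what Kubernetes still reports for the PVC/PV after
# the volume was deleted directly on the storage array.
from kubernetes import client, config

PVC_NAME = "vdbench-pvc"   # placeholder: PVC used by the vdbench pod
NAMESPACE = "default"      # placeholder: namespace of the test pod

def main():
    config.load_kube_config()          # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    pvc = v1.read_namespaced_persistent_volume_claim(PVC_NAME, NAMESPACE)
    print(f"PVC {PVC_NAME}: phase={pvc.status.phase}")         # still prints 'Bound'

    pv = v1.read_persistent_volume(pvc.spec.volume_name)
    print(f"PV {pv.metadata.name}: phase={pv.status.phase}")    # still prints 'Bound'

if __name__ == "__main__":
    main()

Both objects still report Bound, which matches what we saw with kubectl.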
The question is: shouldn't the pod fail and the PV status change, given that the underlying volume no longer exists? We are also wondering, if the same thing happened to a real application, who would take action to kill or evacuate the pods, since from the Kubernetes point of view it is still a happy scenario. One idea we are toying with is sketched below.
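To be concrete about "who takes action": one pattern we are considering, only a sketch and not something Kubernetes does on its own, is to have an exec liveness probe touch the mounted path from inside the pod, so the kubelet restarts the container once IO starts failing. The mount path /data below is a placeholder for the pod's volumeMount:

# Sketch of a script an exec liveness probe could run inside the pod. It writes
# and reads back a small marker file on the mounted volume and exits non-zero
# on any IO error, which would make the kubelet restart the container.
import os
import sys

MOUNT_PATH = "/data"                                    # placeholder volumeMount path
PROBE_FILE = os.path.join(MOUNT_PATH, ".volume-probe")

try:
    with open(PROBE_FILE, "w") as f:
        f.write("ok")
        f.flush()
        os.fsync(f.fileno())                            # push the write through to the device
    with open(PROBE_FILE) as f:
        f.read()
except OSError as exc:
    print(f"volume probe failed: {exc}", file=sys.stderr)
    sys.exit(1)                                         # non-zero exit = probe failure

sys.exit(0)

Whether that is the right place to handle it, or whether the CSI driver should surface the condition back to Kubernetes, is exactly what we would like your opinion on.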