@Alistair Mackay In CKA course Storage PV Lab, I created a volume (not PV) and . . .

Kishore Ram:
@Alistair Mackay In the CKA course Storage PV Lab, I created a volume (not a PV) and mounted it to the container path /log to store the app.log data. The host mount is /var/lib/webapp. As per the definition of a normal volume, the data exists on the host path only as long as the Pod is alive/exists. But when I deleted the Pod, I saw that app.log still exists in /var/lib/webapp. Why? I am still not able to understand the difference between a normal volume and a persistent volume.

Trung Tran:
Can you share your Pod YAML?

Kishore Ram:

apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - env:
    - name: LOG_HANDLERS
      value: file
    image: kodekloud/event-simulator
    name: event-simulator
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    hostPath:
      path: /var/log/webapp
      type: Directory

Trung Tran:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Details about the hostPath volume can be found here, but basically this volume type maps a directory on the host (the node where the Pod is running) into the Pod. In other words, the container in the Pod gets access to a directory on the node.
“A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.”
When you delete the Pod, the directory on the host is still available; that is why you still see the data.

Kishore Ram:
So in this case, we don't see much of a difference between a volume and a persistent volume in terms of availability of data, right?

Aneek Bera:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. So, if you have a multi-node cluster and the pod is restarted for some reason and assigned to another node, the new node won't have the old data on the same path. That's why hostPath volumes work well only on single-node clusters.

Here, Kubernetes local persistent volumes help us overcome that restriction, so we can work in a multi-node environment with no problems. A local PV remembers which node was used for provisioning the volume, thus making sure that a restarting Pod will always find the data storage in the state it was left in before the restart (see the sketch below).
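
A minimal sketch of what such a local PV could look like, assuming an illustrative disk mount path /mnt/disks/ssd1 and node name node-1 (neither comes from the lab):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example              # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage     # assumed storage class
  local:
    path: /mnt/disks/ssd1             # assumed mount point on the node
  nodeAffinity:                       # required for local PVs: pins the volume to one node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                    # assumed node name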

Kishore Ram:
Thanks very much @Trung Tran and @Aneek Bera

Aneek Bera:
Example: https://www.bswen.com/2020/12/others-local-vs-hostpath-k8s.html

Kishore Ram:
Let's say we mounted a local PV to a deployment with 5 replica pods. In this case, will all pods have access to the same mount path? How does the data write happen? All 5 pods cannot write data at the same time on the mount point, right?

Kishore Ram:
@Aneek Bera Let's take a local PV example: a local partition, say /dev/sda2, is on Node1 and dedicated to a PV, and there is a pod assigned to this PV also residing on Node1. Since it's a multi-node cluster, say Node1 fails and the Pod (deployment) is recreated on Node2. What will happen to the data residing on the Node1 PV /dev/sda2?

Aneek Bera:
@Kishore Ram It is possible to create a PVC with the ReadWriteOnce access mode (a claim like the one sketched below) and then create a deployment which runs a stateful application and uses this PVC. It works perfectly fine, but only if you don't want to scale your deployment. If you try to do it, you will probably get an error that the volume is already in use when a pod starts on another node. Even if that is not the case and both pods end up on the same node, they will still write to the same volume. So you don't want this.
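
A rough sketch of such a claim, with an illustrative name and storage class (both assumed, not from the lab):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapp-data                   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce                     # can be mounted read-write by a single node only
  storageClassName: local-storage     # assumed storage class
  resources:
    requests:
      storage: 5Gi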

If you try to scale this deployment, the other replicas will try to mount and use the same volume. That is okay if your volume is read-only. So, how do you work around it for read-write volumes?

You define a StatefulSet with the PVC.

When you have an app which requires persistence, you should create a StatefulSet instead of a Deployment. There are many benefits. Also, you will not have to create PVCs in advance, and you will be able to scale it easily. Of course, the scaling depends on the app you are deploying. With a StatefulSet, you can define volumeClaimTemplates so that a new PVC is created for each replica automatically. Also, you will end up with only one file which defines both your app and its persistent volumes.

For each new replica, the StatefulSet will create a separate volume (see the sketch below). Also, this way it is much easier to manage pods and PVCs at the same time.
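
A rough sketch of what that could look like for the webapp Pod above, reusing the kodekloud/event-simulator image; the Service name, labels, and storage class are illustrative assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webapp
spec:
  serviceName: webapp                 # assumes a matching headless Service exists
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: event-simulator
        image: kodekloud/event-simulator
        env:
        - name: LOG_HANDLERS
          value: file
        volumeMounts:
        - mountPath: /log
          name: log-volume
  volumeClaimTemplates:               # a separate PVC is created for each replica
  - metadata:
      name: log-volume
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage # assumed storage class
      resources:
        requests:
          storage: 1Gi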

Kishore Ram:
Excellent. Thanks Aneek. One last query: when each replica has its own PV (StatefulSet), will the data in all the PVs get mirrored automatically?

Aneek Bera:
@Kishore Ram No, the data will not be mirrored automatically. The data will remain in place and a re-created pod will be able to access it. Now, if you want to mirror the data automatically, you need some tricks and turns. One way is to create redundancy of the MySQL DB in the system; in this way, data gets replicated on different pods (although they will use separate PVCs). Or, if you want multi-redundancy, it is better to use a replication controller and add specifications to it.
This kind of architecture is used only rarely, in very high-availability systems.

Aneek Bera:
But remember, those redundancies are based on software and its implementation.