Questions on PV and PVC

Hello Team,

I have a question about persistent volumes and persistent volume claims.

My understanding of a persistent volume (PV) is that it is just storage that an administrator provisions on the node, which can be used to store container-related files and container data. Am I correct?

My understanding of persistent volume claims (PVCs) is that when you create a Pod, you claim the storage created as the persistent volume for that specific Pod, so it can store the container-related data and files. Am I correct here?

Another question I have: when you use ReplicaSets, say for an application that needs multiple Pods, and those Pods claim a persistent volume, my understanding is that each node should have that PV provisioned for the Pod to claim. Am I correct here?

Yes, a PV/volume needs to be available on the node so that Pods can claim it and mount it via volumeMounts.
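
As a minimal sketch of that relationship (the names pv-demo/pvc-demo and the hostPath type are illustrative, not from this thread; hostPath only suits single-node test clusters), a statically provisioned PV and the PVC that binds to it could look like this:

# Illustrative only: a statically provisioned PV and a PVC that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                  # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv-demo          # directory that must exist on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo                 # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""           # "" skips dynamic provisioning, binds to an unclassed PV
  resources:
    requests:
      storage: 1Gi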

A Deployment creates a ReplicaSet that, in the background, creates and maintains the specified number of identical Pods. The key here is identical Pods: every Pod in the ReplicaSet is created with the same volumeMounts, using the same PV/volumes.
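
To make the "identical Pods" point concrete, here is an illustrative Deployment (all names are hypothetical) whose replicas all reference the same PVC:

# Illustrative only: every replica gets the same volumeMounts and references
# the same PVC. With ReadWriteOnce storage this only works if all replicas
# are scheduled onto the node where the volume is attached.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.23
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-demo  # the PVC from the sketch above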

Note that there is another way of providing storage to Pods: StorageClasses, which provision PVs dynamically to satisfy PVCs.
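
As a sketch of dynamic provisioning (the StorageClass name "fast" and the PVC name are hypothetical; the provisioner value depends on your cluster, and ebs.csi.aws.com is shown only as one real-world example):

# Illustrative only: a StorageClass plus a PVC that triggers dynamic
# provisioning. The provisioner depends on your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                     # hypothetical name
provisioner: ebs.csi.aws.com     # example: the AWS EBS CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast         # a matching PV is created automatically
  resources:
    requests:
      storage: 5Gi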

Let's say I have an application with 5 Pods, and I want to run those 5 Pods on 5 different nodes. On all of those nodes a PV should be provisioned for the Pods to issue a PVC request. Correct?

Yes, if each Pod requires storage, you need a volume provisioned on each node where a Pod is scheduled. Alternatively, for cluster-wide storage requirements, you can use a StorageClass, which enables dynamic provisioning of cluster-wide storage.
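
For the per-node case, a statically provisioned "local" PV is pinned to one node via nodeAffinity, and one such PV would be needed per node. The sketch below is illustrative (pv-node1-data, the path, and node1 are made-up names):

# Illustrative only: a local PV is tied to a specific node through
# required nodeAffinity, so only Pods on that node can use it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-node1-data            # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/disks/ssd1        # path on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1          # hypothetical node name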


@Santosh_KodeKloud … I have a question about the below:

Yes, a PV/volume needs to be available on the node so that Pods can claim it and mount it via volumeMounts.

A Deployment creates a ReplicaSet that, in the background, creates and maintains the specified number of identical Pods. The key here is identical Pods: every Pod in the ReplicaSet is created with the same volumeMounts, using the same PV/volumes.
This is regarding AKS. There we also have PV/PVC, but the disk is mounted on only one node. Am I wrong or missing anything? I just want to understand how a Pod, when terminated and scheduled onto another node, would be able to see the same state/data it had before getting terminated.

If I understand your question, this is what accessModes are for. The "Once" or "Many" part of a value like ReadWriteMany indicates whether a given volume will be accessible on a single node or on multiple nodes. If you want the data to be shared across multiple nodes, choose one of the "Many" values.
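
For example, a PVC requesting multi-node access could look like the illustrative sketch below (the name is hypothetical, and the backing storage must actually support the requested mode):

# Illustrative only: requesting ReadWriteMany means the volume can be
# mounted read-write by Pods on many nodes at once. The underlying storage
# (e.g. NFS or Azure Files) must support this mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-shared               # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi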

I have two YAML files and I am testing with AKS. If I understand the manifests below correctly: we create a PVC, then we define a volume named azuredisk01 in the Pod and mount it inside the container at /mnt/azuredisk. The azuredisk01 volume is mapped to the actual PVC. Am I correct here?


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi

======================

kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - image: mcr.microsoft.com/mirror/docker/library/nginx:1.23
      name: nginx-azuredisk
      command:
        - "/bin/sh"
        - "-c"
        - while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
      volumeMounts:
        - name: azuredisk01
          mountPath: "/mnt/azuredisk"
          readOnly: false
  volumes:
    - name: azuredisk01
      persistentVolumeClaim:
        claimName: pvc-azuredisk

Now, the PVC volume that I created is associated with one node of the VMSS.

If I now cordon and drain that node, what happens to the Pod when it lands on another node? Will it create another PVC, or will it reuse the PVC that was already created?

There are a couple of Azure docs worth reading on this topic; I'd start with this one. The key takeaway is that an Azure disk is only available on a single node and is mounted ReadWriteOnce (Azure Files is used for multiple nodes). So in your example, the disks will necessarily be different disks.
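
For completeness, an Azure Files-backed PVC that supports ReadWriteMany might look like the sketch below. I'm assuming azurefile-csi is one of the built-in AKS storage classes, so verify with kubectl get storageclass on your cluster:

# Illustrative sketch: an Azure Files-backed PVC that supports
# ReadWriteMany, so Pods on different nodes can share the same data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi  # assumed built-in class; check your cluster
  resources:
    requests:
      storage: 10Gi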