Harisai Marisa:
Dear Team, I am new to Kubernetes and created a cluster using kubeadm as shown in a Udemy course. After that, I am not able to deploy some applications; they fail with the error below:

no persistent volumes available for this claim and no storage class is set

I have created a pv and pvc and those are working, but I am still facing the same issue. It seems I may need to change the storage class or volumeMode.

Please help me if there are any ready-made scripts to create PVs after a manual cluster setup.

unnivkn:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
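
Note the pod mounts the PersistentVolumeClaim named myclaim, so a claim with that name has to exist and reach Bound before the pod can schedule. A minimal sketch of such a claim, assuming a 1Gi ReadWriteOnce request and that either a default StorageClass or a matching PV is available:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumed size; adjust to what your PV offers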

unnivkn:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Harisai Marisa:
vagrant@kubemaster:~$ k describe pod cluster-example-1-initdb-slfr2
Name:           cluster-example-1-initdb-slfr2
Namespace:      default
Priority:       0
Node:           <none>
Labels:         controller-uid=17c51581-9803-4bec-a3c5-60b5cefce5d0
                job-name=cluster-example-1-initdb
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  Job/cluster-example-1-initdb
Init Containers:
  bootstrap-controller:
    Image:      quay.io/enterprisedb/cloud-native-postgresql:1.11.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /manager
      bootstrap
      /controller/manager
      --log-level=info
    Environment:  <none>
    Mounts:
      /controller from scratch-data (rw)
      /dev/shm from shm (rw)
      /etc/app-secret from app-secret (rw)
      /etc/superuser-secret from superuser-secret (rw)
      /run from scratch-data (rw)
      /var/lib/postgresql/data from pgdata (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fjs7d (ro)
Containers:
  initdb:
    Image:      quay.io/enterprisedb/postgresql:14.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /controller/manager
      instance
      init
      --parent-node
      cluster-example-rw
      --initdb-flags
      --encoding=UTF8 --lc-collate=C --lc-ctype=C
      --app-db-name
      app
      --app-user
      app
      --log-level=info
    Environment:
      PGDATA:        /var/lib/postgresql/data/pgdata
      POD_NAME:      cluster-example-1
      NAMESPACE:     default
      CLUSTER_NAME:  cluster-example
      PGPORT:        5432
      PGHOST:        /controller/run
    Mounts:
      /controller from scratch-data (rw)
      /dev/shm from shm (rw)
      /etc/app-secret from app-secret (rw)
      /etc/superuser-secret from superuser-secret (rw)
      /run from scratch-data (rw)
      /var/lib/postgresql/data from pgdata (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fjs7d (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  pgdata:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cluster-example-1
    ReadOnly:   false
  scratch-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  shm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  superuser-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-example-superuser
    Optional:    false
  app-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-example-app
    Optional:    false
  kube-api-access-fjs7d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---  ----               -------
  Warning  FailedScheduling  68s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

unnivkn:
Could you please share your pod, pv, pvc yaml files?
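You can dump them with something like:

kubectl get pod cluster-example-1-initdb-slfr2 -o yaml
kubectl get pv -o yaml
kubectl get pvc cluster-example-1 -o yaml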

Harisai Marisa:
It's actually a custom one made by EDB

Harisai Marisa:
vagrant@kubemaster:~$ k get pvc cluster-example-1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    k8s.enterprisedb.io/nodeSerial: "1"
    k8s.enterprisedb.io/operatorVersion: 1.11.0
    k8s.enterprisedb.io/pvcStatus: initializing
  creationTimestamp: "2021-12-30T12:10:30Z"
  finalizers:

Harisai Marisa:
I believe there is some issue with volumeMode, that is, a mismatch between the PV and the PVC.

unnivkn:
Are your pv & pvc in Bound status? k get pv,pvc. Are you using any storageClass? Is this dynamic provisioning?

Harisai Marisa:
pv is in Bound state, pvc is in Pending state

Harisai Marisa:
I created the storage class manually

unnivkn:
Please check why the pvc is in Pending. Hope you are not creating any pv manually.

Harisai Marisa:
I have created PV manually

Harisai Marisa:
vagrant@kubemaster:~$ k describe pvc cluster-example-1
Name:          cluster-example-1
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        k8s.enterprisedb.io/cluster=cluster-example
Annotations:   k8s.enterprisedb.io/nodeSerial: 1
               k8s.enterprisedb.io/operatorVersion: 1.11.0
               k8s.enterprisedb.io/pvcStatus: initializing
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       cluster-example-1-initdb-slfr2
Events:
  Type    Reason         Age                  From                         Message
  ----    ------         ---                  ----                         -------
  Normal  FailedBinding  102s (x2 over 112s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
vagrant@kubemaster:~$

unnivkn:
Hi @Harisai Marisa, I request you to please go through the topics pv, pvc & storageClass.
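In short: a claim binds only to a PV whose storageClassName, accessModes, volumeMode, and capacity are compatible with what the claim requests. Your describe output shows the claim has an empty StorageClass, so a manually created PV would also need to leave storageClassName unset in order to match. A minimal sketch of such a PV, assuming a hostPath directory on the node and that the claim asks for 1Gi ReadWriteOnce:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cluster-example-1-pv
spec:
  capacity:
    storage: 1Gi            # must be >= the claim's request (assumed 1Gi here)
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem    # matches the claim's VolumeMode
  persistentVolumeReclaimPolicy: Retain
  # storageClassName deliberately left unset so it can match a claim with no storage class
  hostPath:
    path: /mnt/data/cluster-example-1   # hypothetical path on the node

Alternatively, skip manual PVs entirely: install a dynamic provisioner and mark its StorageClass as the default, so claims created without a storageClassName get provisioned automatically. One common choice for kubeadm lab clusters (assuming the nodes have internet access) is the Rancher local-path provisioner:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'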

Harisai Marisa:
Sure, I will be back after reading those.

Harisai Marisa:
I got it. The mistake is that I created the PV with storage class = standard, while the pvc was requesting some unknown (at least to me) storage class.
So this time I tried dynamic provisioning and it worked.
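In case it helps anyone else: you can check which class a claim actually requests with

k get pvc cluster-example-1 -o jsonpath='{.spec.storageClassName}'

and compare it against the output of k get storageclass.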

Harisai Marisa:
Now facing a new issue:

Harisai Marisa:
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ---                  ----               -------
  Normal   Scheduled    3m41s                default-scheduler  Successfully assigned default/cluster-example-1-initdb-dlhhf to kubenode01
  Normal   Pulling      3m41s                kubelet            Pulling image "quay.io/enterprisedb/cloud-native-postgresql:1.11.0"
  Normal   Pulled       3m22s                kubelet            Successfully pulled image "quay.io/enterprisedb/cloud-native-postgresql:1.11.0" in 18.561458087s
  Normal   Created      3m22s                kubelet            Created container bootstrap-controller
  Normal   Started      3m22s                kubelet            Started container bootstrap-controller
  Normal   Pulling      3m20s                kubelet            Pulling image "quay.io/enterprisedb/postgresql:14.1"
  Normal   Pulled       2m16s                kubelet            Successfully pulled image "quay.io/enterprisedb/postgresql:14.1" in 1m4.658313264s
  Normal   Created      2m15s                kubelet            Created container initdb
  Normal   Started      2m15s                kubelet            Started container initdb
  Warning  FailedMount  2m2s (x5 over 2m9s)  kubelet            MountVolume.SetUp failed for volume "kube-api-access-x5c5x" : object "default"/"kube-root-ca.crt" not registered
  Warning  FailedMount  114s (x6 over 2m9s)  kubelet            MountVolume.SetUp failed for volume "superuser-secret" : object "default"/"cluster-example-superuser" not registered
  Warning  FailedMount  114s (x6 over 2m9s)  kubelet            MountVolume.SetUp failed for volume "app-secret" : object "default"/"cluster-example-app" not registered