
Piyush:
Anyone tell me best way to restore snapshot which is stored in some location ? I did not see good example …

Mohamed Ayman:
Check this useful repo https://github.com/mmumshad/kubernetes-cka-practice-test-solution-etcd-backup-and-restore

Mahesh Kadam:
Please refer my Notes Here
etcd Backup

  1. Make sure the cluster is running fine.
  2. Note down the required certs from the etcd pod:
    a. --cert-file=/etc/kubernetes/pki/etcd/server.crt
    b. --key-file=/etc/kubernetes/pki/etcd/server.key
    c. --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    d. --listen-client-urls=https://127.0.0.1:2379,https://192.168.100.31:2379 # might be required in case the question asks you to restore on another host
  3. Take the etcd snapshot
    ➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
        --cacert /etc/kubernetes/pki/etcd/ca.crt \
        --cert /etc/kubernetes/pki/etcd/server.crt \
        --key /etc/kubernetes/pki/etcd/server.key
    Snapshot saved at /tmp/etcd-backup.db
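The cert flags in step 2 can be grepped straight from the etcd static pod manifest instead of copied by hand. A minimal sketch, run here against a hypothetical local stand-in for the manifest so it works without a live master node:

```shell
# Hypothetical stand-in for /etc/kubernetes/manifests/etcd.yaml,
# so the grep can be demonstrated without a live cluster.
cat > /tmp/etcd-manifest.yaml <<'EOF'
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --listen-client-urls=https://127.0.0.1:2379
EOF

# On a real master node, point this at /etc/kubernetes/manifests/etcd.yaml instead.
grep -E 'cert-file|key-file|trusted-ca-file|listen-client-urls' /tmp/etcd-manifest.yaml
```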

Note: save the snapshot at /tmp/etcd-backup.db. Do not run snapshot status.

  1. Copy the .yaml files from /etc/kubernetes/manifests to some folder (the kubelet stops the static pods once their manifests are gone).
  2. Now we restore the snapshot into a specific data directory.
    ➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
        --data-dir /var/lib/etcd-backup \
        --cacert /etc/kubernetes/pki/etcd/ca.crt \
        --cert /etc/kubernetes/pki/etcd/server.crt \
        --key /etc/kubernetes/pki/etcd/server.key # IMP to use the cert references. I was missing this as well
  3. Now modify etcd.yaml and change the hostPath volume to path: /var/lib/etcd-backup
  4. Wait several minutes for the pods to start again.
  5. If the pods are not coming up, restart the kubelet service:
    a. systemctl stop kubelet
    b. systemctl start kubelet
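The manifest edit in step 3 can be scripted with a one-line sed. A minimal sketch, run here against a hypothetical copy of the manifest fragment instead of the real /etc/kubernetes/manifests/etcd.yaml, and assuming the default data dir /var/lib/etcd and the restore target /var/lib/etcd-backup used above:

```shell
# Hypothetical fragment of the etcd static pod manifest, written to /tmp
# so the edit can be demonstrated without touching a live cluster.
cat > /tmp/etcd.yaml <<'EOF'
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
EOF

# Point the etcd-data hostPath at the directory restored in step 2.
sed -i 's|path: /var/lib/etcd$|path: /var/lib/etcd-backup|' /tmp/etcd.yaml

grep 'path:' /tmp/etcd.yaml
# On the real master you would edit /etc/kubernetes/manifests/etcd.yaml the same way;
# the kubelet notices the change and recreates the static etcd pod.
```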

Piyush:
Thank you @Mahesh Kadam and @Mohamed Ayman

Piyush:
Still confused, I got two different solutions from both of you…
one:

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --name=master \
     --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
     --data-dir /var/lib/etcd-from-backup \
     --initial-cluster=master=https://127.0.0.1:2380 \
     --initial-cluster-token etcd-cluster-1 \
     --initial-advertise-peer-urls=https://127.0.0.1:2380 \
     snapshot restore /tmp/snapshot-pre-boot.db

second:
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
     --data-dir /var/lib/etcd-backup \
     --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/server.crt \
     --key /etc/kubernetes/pki/etcd/server.key

I saved my snapshot with snapshot save to the location get/data/etcd-snapshot.db, and I have restored it from the /var/lib/etcd-snapshot.db location… so what will be the restore steps?

I am thinking of it the below way:
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd-snapshot.db \
     --endpoints=https://127.0.0.1:2379 \
     --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/server.crt \
     --key /etc/kubernetes/pki/etcd/server.key
Should the above solution work?

Mahesh Kadam:
Mine is tested multiple times in all labs related to etcd backup, mock exams, etc. I am following that.

Mahesh Kadam:
I need to restart the kubelet sometimes. I never found this step in any of the solutions, but restarting the kubelet works for me every time.

Piyush:
Thanks @Mahesh Kadam