I have problems taking a snapshot backup with: `ETCDCTL_API=3 etcdctl snapshot s . . .`

Michael:
I have problems taking a snapshot backup with:
ETCDCTL_API=3 etcdctl snapshot save snapshotdb
After more than 5 minutes I still have an empty snapshot.db.part file in the directory. What am I doing wrong? I've read the documentation about it, but so far I can't find what I'm forgetting. I think this is what is described in the first part of 128 - 'Backup and Restore methods'.

Mohamed Ayman:
Hello @Michael,
Try the steps in this link: https://github.com/mmumshad/kubernetes-cka-practice-test-solution-etcd-backup-and-restore

Michael:
Ok, that's it, really. Thank you very much for the help; hopefully I'll find that in the Kubernetes documentation for the exam.

Michael:
Well, I could have found the paths in the k8s documentation with a little more scrolling:
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
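That page also shows how to check the snapshot after it is written, for example:

```sh
# Print hash, revision, total keys, and size of the saved snapshot
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
```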

Akshay Jain:
I hit the same issue once. You need to specify the endpoints, cert, and key files; otherwise the command just sits there with an empty .part file.
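For anyone who hits this later, the full command looks roughly like this; a minimal sketch assuming the default kubeadm certificate paths under /etc/kubernetes/pki/etcd, so verify the endpoint and paths against your own etcd manifest:

```sh
# TLS flags are required because etcd serves client traffic over HTTPS;
# the paths below are kubeadm defaults and may differ on your cluster.
ETCDCTL_API=3 etcdctl snapshot save snapshotdb \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```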

Michael:
The howto is great, but the last point, “After the manifest files are modified ETCD cluster should automatically restart”, doesn't work: the lab ends, but the deployments never come back.

Michael:
I had to set these lines: the hostPath path to /var/lib/etcd-from-backup, the --data-dir=/var/lib/etcd-from-backup flag, and the mountPath to /var/lib/etcd-from-backup. After that I checked with journalctl -xe, but there is no automatic restart in the lab. I had to run systemctl daemon-reload and systemctl restart kubelet; after that it ran immediately.
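Put together, those edits to the static pod manifest look roughly like this; a sketch assuming a default kubeadm /etc/kubernetes/manifests/etcd.yaml where the volume is named etcd-data (only the changed fields are shown):

```yaml
# /etc/kubernetes/manifests/etcd.yaml (excerpt)
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-from-backup    # point etcd at the restored data
    # ...other flags unchanged...
    volumeMounts:
    - mountPath: /var/lib/etcd-from-backup    # mount the restored directory in the container
      name: etcd-data
  volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup         # host directory created by `etcdctl snapshot restore`
      type: DirectoryOrCreate
    name: etcd-data
```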

Akshay Jain:
You would need to kill the existing etcd pods so that new ones get created automatically. Alternatively, as you did, the new pods are created once the kubelet is running again.
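One way to do that, sketched here assuming a kubeadm control-plane node where the etcd pod is named etcd-controlplane (the suffix follows the node's hostname, so check with kubectl get pods first):

```sh
# Delete the etcd pod; the kubelet recreates static pods from the manifest directory
kubectl delete pod etcd-controlplane -n kube-system
# Watch it come back up
kubectl get pods -n kube-system --watch
```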