Sandeep Goyal:
Hi everyone,
I use the commands below for etcd backup and restore on a local Kubernetes cluster (1 master and 2 workers), set up using the Vagrant files provided in the course:
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" snapshot save snapshotdb1
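Is the snapshot itself expected to be valid at this point? I assume I can sanity-check it with the standard status subcommand (this check is my own addition, not from the course):

sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb1
# prints the snapshot's hash, revision, total key count, and size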
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" --data-dir=/var/lib/etcd-backup/ --initial-advertise-peer-urls="https://127.0.0.1:2380" --initial-cluster="kubemaster=https://127.0.0.1:2380" --initial-cluster-token="etcd-cluster1" --name="kubemaster" snapshot restore snapshotdb1
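Do I also need to point the etcd static pod at the restored data directory after this? I assume the change would go in /etc/kubernetes/manifests/etcd.yaml (kubeadm's default static pod manifest location; the exact edit below is my guess):

  volumes:
  - hostPath:
      # point etcd at the restored data dir instead of /var/lib/etcd
      path: /var/lib/etcd-backup
      type: DirectoryOrCreate
    name: etcd-data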
After restoring the backup, I cannot see the pods and services that were running; I can only see the default `kubernetes` ClusterIP service.
I also noticed that `kubectl get nodes` doesn’t show any nodes until I restart the kubelet service on each of the nodes, and after that the kube-proxy and weave-net pods die.
Kindly guide me on what I am doing wrong.