I restored an etcd snapshot following the instructions in the Kubernetes documentation

Hi, I get the following error after restoring the etcd snapshot. What might have gone wrong? From the log I can see that etcd was restored successfully, and etcd is in the running state, but the error says the admin user has no privileges.

etcd restore logs
Deprecated: Use etcdutl snapshot restore instead.

2025-08-18T13:42:51Z info snapshot/v3_snapshot.go:251 restoring snapshot {"path": "/opt/cluster_backup.db", "wal-dir": "/root/default.etcd/member/wal", "data-dir": "/root/default.etcd", "snap-dir": "/root/default.etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:257\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:128\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
2025-08-18T13:42:51Z info membership/store.go:119 Trimming membership information from the backend...
2025-08-18T13:42:51Z info membership/cluster.go:393 added member {"cluster-id": "cdf818194e3a8c32", "local-member-id": "0", "added-peer-id": "8e9e05c52164694d", "added-peer-peer-urls": ["http://localhost:2380"]}
2025-08-18T13:42:51Z info snapshot/v3_snapshot.go:272 restored snapshot {"path": "/opt/cluster_backup.db", "wal-dir": "/root/default.etcd/member/wal", "data-dir": "/root/default.etcd", "snap-dir": "/root/default.etcd/member/snap"}
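
For reference, judging from the paths in the log above, the restore command was along these lines (the exact flags I used may have differed slightly):

    ETCDCTL_API=3 etcdctl snapshot restore /opt/cluster_backup.db \
      --data-dir /root/default.etcd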

Error:
Error from server (Forbidden): nodes is forbidden: User "kubernetes-admin" cannot list resource "nodes" in API group "" at the cluster scope

This is probably a caching-related problem. kube-apiserver caches state that it pulls from etcd, and if it is running during the restore, you can end up with odd states like this.

The best fix I’ve found is to make sure that kube-apiserver is NOT running during the restore. You want to do it in roughly this order (a rough command sketch follows the list):

  • Restore the etcd snapshot to a new data directory.
  • Take kube-apiserver down; in a kubeadm install, this means moving the kube-apiserver.yaml file out of the /etc/kubernetes/manifests directory and restarting kubelet on that node.
  • Now modify /etc/kubernetes/manifests/etcd.yaml as needed for the new data-dir.
  • Restart kubelet and wait for etcd to come back up.
  • Move kube-apiserver.yaml back into /etc/kubernetes/manifests and restart kubelet.
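
On a single kubeadm control-plane node, that sequence looks roughly like this (the restore directory /var/lib/etcd-restored is just an example; adjust paths to your setup):

    # 1. Restore the snapshot into a fresh data directory
    ETCDCTL_API=3 etcdctl snapshot restore /opt/cluster_backup.db \
      --data-dir /var/lib/etcd-restored

    # 2. Take kube-apiserver down by moving its static pod manifest aside
    mv /etc/kubernetes/manifests/kube-apiserver.yaml /root/
    systemctl restart kubelet

    # 3. Point etcd at the restored data: edit the --data-dir flag and the
    #    hostPath volume in the manifest, then let kubelet restart etcd
    vi /etc/kubernetes/manifests/etcd.yaml
    systemctl restart kubelet

    # 4. Once etcd is healthy again, bring kube-apiserver back
    mv /root/kube-apiserver.yaml /etc/kubernetes/manifests/
    systemctl restart kubelet

After that, kubectl requests as kubernetes-admin should work again, since the apiserver is now serving RBAC data from the restored etcd rather than a stale cache.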