Mumshad Mannambeth:
using the docker logs command
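As a hedged sketch, pulling the etcd container logs shown below might look like this (the grep filter and container runtime are assumptions; containerd-based clusters would use `crictl` instead of `docker`):

```shell
# Find the etcd container ID (name filter is an assumption; adjust to your cluster)
docker ps | grep etcd

# Show the last 100 log lines of that container
docker logs --tail 100 <etcd-container-id>
```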
Gennway:
etcd :
Gennway:
2021-07-02 11:58:19.267343 I | etcdmain: etcd Version: 3.4.9
2021-07-02 11:58:19.267376 I | etcdmain: Git SHA: 54ba95891
2021-07-02 11:58:19.267383 I | etcdmain: Go Version: go1.12.17
2021-07-02 11:58:19.267385 I | etcdmain: Go OS/Arch: linux/amd64
2021-07-02 11:58:19.267388 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2021-07-02 11:58:19.267450 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-07-02 11:58:19.267483 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-07-02 11:58:19.268205 I | embed: name = controlplane
2021-07-02 11:58:19.268213 I | embed: data dir = /var/lib/etcd
2021-07-02 11:58:19.268216 I | embed: member dir = /var/lib/etcd/member
2021-07-02 11:58:19.268219 I | embed: heartbeat = 100ms
2021-07-02 11:58:19.268221 I | embed: election = 1000ms
2021-07-02 11:58:19.268223 I | embed: snapshot count = 10000
2021-07-02 11:58:19.268229 I | embed: advertise client URLs = https://172.17.0.10:2379
2021-07-02 11:58:19.268232 I | embed: initial advertise peer URLs = https://172.17.0.10:2380
2021-07-02 11:58:19.268236 I | embed: initial cluster =
2021-07-02 11:58:19.289436 I | etcdserver: restarting member f2872e5e711415ed in cluster 7f286a391c00da73 at commit index 3093
raft2021/07/02 11:58:19 INFO: f2872e5e711415ed switched to configuration voters=()
raft2021/07/02 11:58:19 INFO: f2872e5e711415ed became follower at term 2
raft2021/07/02 11:58:19 INFO: newRaft f2872e5e711415ed [peers: [], term: 2, commit: 3093, applied: 0, lastindex: 3093, lastterm: 2]
2021-07-02 11:58:19.291333 W | auth: simple token is not cryptographically signed
2021-07-02 11:58:19.292115 I | mvcc: restore compact to 1872
2021-07-02 11:58:19.296760 I | etcdserver: starting server... [version: 3.4.9, cluster version: to_be_decided]
raft2021/07/02 11:58:19 INFO: f2872e5e711415ed switched to configuration voters=(17475987862193444333)
2021-07-02 11:58:19.297607 I | etcdserver/membership: added member f2872e5e711415ed [https://172.17.0.10:2380] to cluster 7f286a391c00da73
2021-07-02 11:58:19.297788 N | etcdserver/membership: set the initial cluster version to 3.4
2021-07-02 11:58:19.297955 I | etcdserver/api: enabled capabilities for version 3.4
2021-07-02 11:58:19.299497 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-07-02 11:58:19.299724 I | embed: listening for metrics on http://127.0.0.1:2381
2021-07-02 11:58:19.300013 I | embed: listening for peers on 172.17.0.10:2380
raft2021/07/02 11:58:20 INFO: f2872e5e711415ed is starting a new election at term 2
raft2021/07/02 11:58:20 INFO: f2872e5e711415ed became candidate at term 3
raft2021/07/02 11:58:20 INFO: f2872e5e711415ed received MsgVoteResp from f2872e5e711415ed at term 3
raft2021/07/02 11:58:20 INFO: f2872e5e711415ed became leader at term 3
raft2021/07/02 11:58:20 INFO: raft.node: f2872e5e711415ed elected leader f2872e5e711415ed at term 3
2021-07-02 11:58:20.593598 I | etcdserver: published {Name:controlplane ClientURLs:[https://172.17.0.10:2379]} to cluster 7f286a391c00da73
2021-07-02 11:58:20.593979 I | embed: ready to serve client requests
2021-07-02 11:58:20.594316 I | embed: ready to serve client requests
2021-07-02 11:58:20.598206 I | embed: serving client requests on 127.0.0.1:2379
2021-07-02 11:58:20.858059 I | embed: serving client requests on 172.17.0.10:2379
2021-07-02 11:58:31.175035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
Gennway:
from what I understand, the API server cannot connect to etcd
Gennway:
even if etcd health is ok
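The `/health OK` line in the log above can also be verified from the client side. A minimal sketch, assuming the kubeadm-default certificate paths printed in the etcd startup log:

```shell
# Query etcd health directly with etcdctl (v3 API);
# the cert paths match the ones shown in the log above
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```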
Mumshad Mannambeth:
OK, can you restart the API server then?
Mumshad Mannambeth:
just delete the API server container
Mumshad Mannambeth:
and it should auto-restart
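The suggested restart could be sketched like this (container names are assumptions; kube-apiserver is a static pod, so kubelet recreates its container once it is removed):

```shell
# Find and force-remove the kube-apiserver container; kubelet restarts it
docker ps | grep kube-apiserver
docker rm -f <apiserver-container-id>

# Alternative: temporarily move the static pod manifest out and back
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
```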
Mumshad Mannambeth:
On a side note, the question for the mock exam is only to take a backup isn’t it? not to restore.
Gennway:
yeah, it is to back up, but I thought I could make an extra task for myself
Gennway:
to check if I remembered how to restore the backup
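For reference, the backup step the mock exam asks for is typically a snapshot save; a minimal sketch assuming kubeadm-default cert paths and an example output path:

```shell
# Take an etcd snapshot (output path /opt/snapshot.db is an example)
ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```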
Gennway:
restarted the API server, still doesn't work :circlethinking:
Gennway:
maybe this cluster isn't set up for restoring etcd
Gennway:
and I shouldn't worry about it; I was able to restore it in the lab that had the restore task
Esra:
yes, same here, I wanted to practice the restore but it doesn't seem to work in the lab.
Sanjay Kumar:
Yes, I agree. I tried the same scenario in mock exam 2: the etcd backup worked fine, but when I tried to restore the same backup, none of the control plane components came up. I shared the error and a whole lot of details in the Udemy Q&A section as well.
Sanjay Kumar:
Also, I tried the same scenario on one of the Killer Shell clusters and it worked fine there with no problems (it just needs --skip-hash-check), but the same etcd restore command is not working in the CKA course labs.
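The restore flow being discussed could be sketched as follows (snapshot and data-dir paths are assumptions):

```shell
# Restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot.db \
  --data-dir=/var/lib/etcd-from-backup \
  --skip-hash-check

# Then edit /etc/kubernetes/manifests/etcd.yaml so the etcd static pod's
# hostPath volume points at /var/lib/etcd-from-backup; kubelet restarts etcd
```

Note that --skip-hash-check bypasses the snapshot integrity hash, which is only required when the snapshot file was not produced by `etcdctl snapshot save` (e.g. a copy of the member's db file).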
Sanjay Kumar:
I had my CKA exam today and was able to take the backup without any issues, but during the restore I had the same situation where the control plane components were not coming up or showing when running:
kubectl -n kube-system get all
controlplane $ k get pods -A
No resources found
So I guess we need to find out what is causing this behavior.
@Mumshad Mannambeth - I have posted the details of the same issue (step by step, with logs for the etcd pod) for exam 2 in the Udemy course as well. I'd appreciate it if you could check and suggest why this is happening.