First, I needed AI help to figure out why API server connectivity from inside the cluster was broken after restoring etcd: all kube-proxy pods had to be deleted so they would be recreated and rebuild the iptables rules correctly.
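Deleting the pods is enough because kube-proxy runs as a DaemonSet, which recreates them, and each new pod rewrites its node's iptables rules on startup. A minimal sketch, assuming a kubeadm-style cluster where the DaemonSet is labeled `k8s-app=kube-proxy` (run against the cluster, not locally):

```shell
# Delete all kube-proxy pods; the DaemonSet recreates them,
# and on startup each replacement rewrites the node's iptables rules.
kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# Watch the replacements come up before retesting in-cluster connectivity.
kubectl -n kube-system get pods -l k8s-app=kube-proxy -w
```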
But a few notes on the ambiguity of some of the tasks:
Task 7
Identify the CPU and memory resource capacity on the cluster2-node01 node and save the results in /root/cluster2-node01-cpu.txt and /root/cluster2-node01-memory.txt, respectively, on the cluster2-controlplane. Store the values in the following format:
<Resource-name>: <Value>
The question is ambiguous: should <Resource-name> be the resource name (cpu/memory) or the node name? Since all four values (two metrics, two nodes) are split across four separate files, there is no logic to guess the intended key from.
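One of the two plausible readings, using the resource name as the key, could be sketched like this; the node name and file paths come from the task, but the key choice is my assumption (run against the cluster):

```shell
# Read capacity from the node object and write one "<name>: <value>" line per file.
echo "cpu: $(kubectl get node cluster2-node01 -o jsonpath='{.status.capacity.cpu}')" \
  > /root/cluster2-node01-cpu.txt
echo "memory: $(kubectl get node cluster2-node01 -o jsonpath='{.status.capacity.memory}')" \
  > /root/cluster2-node01-memory.txt
```

The other reading would put the node name before the colon instead; nothing in the task text rules either out.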
Task 13
A pod called
elastic-app-cka02-archis running in thedefaultnamespace. TheYAMLfile for this pod is available at/root/elastic-app-cka02-arch.yamlon thecluster3-controlplane. The single application container in this pod writes logs to the file/var/log/elastic-app.log.One of our logging mechanisms needs to read these logs to send them to an upstream logging server, but we don’t want to increase the read overhead for our main application container. So, you need to
recreatethis POD with an additional co-located container namedbusyboxthat will run along with the application container and print to theSTDOUTby running the commandtail -f /var/log/elastic-app.log. You can use thebusyboximage for this container.
I did this using a native sidecar (an initContainer with restartPolicy: Always) and added a short sleep at the beginning, following best practices. And even though the check for the pod is green, the check for the YAML is not. To be fair, without the sleep (or creating the log file in advance) the sidecar crashes prematurely because the file does not exist yet; but in a real scenario we should use the sidecar pattern so that we don't miss any logs from the main container.
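The approach above can be sketched as follows. The application container's name and image are placeholders, since the real spec lives in /root/elastic-app-cka02-arch.yaml; the busybox sidecar and the shared log volume are the parts the task asks for:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: elastic-app-cka02-arch
  namespace: default
spec:
  initContainers:
  - name: busybox
    image: busybox
    restartPolicy: Always        # makes this initContainer a native sidecar (K8s 1.28+)
    command: ["sh", "-c", "sleep 5; tail -f /var/log/elastic-app.log"]
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  containers:
  - name: elastic-app            # placeholder: real name/image are in the original YAML
    image: some-elastic-app-image
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```

The sleep matters because a native sidecar starts before the main container, and busybox's tail -f exits with an error if the file does not exist yet; the grader, however, apparently expected a plain second entry under containers instead.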