Setting up K8s cluster

I tried to set up a K8s cluster using the Vagrant scripts for Ubuntu from GitHub - kodekloudhub/certified-kubernetes-administrator-course: Certified Kubernetes Administrator - CKA Course

The kube-proxy and kube-flannel pods are always using my host IP - is this expected? I understood they should use pod network IPs.

Issue:

  • I have a web app, and when I try to access it via the controlplane IP or the other node's IP, it's not accessible.
  • I can only access the app via the IP of the node where the app pod is running.
    I have not replicated it across the nodes.

Here are the IPs for each pod (I removed the app pod to check the setup).
I did a kubeadm reset, but it didn't help.

vagrant@controlplane:~$ kubectl get pods -A -o wide
NAMESPACE      NAME                                   READY   STATUS    RESTARTS        AGE     IP              NODE           NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-bh5bs                  1/1     Running   0               2m7s    192.168.68.57   controlplane   <none>           <none>
kube-system    coredns-674b8bbfcf-6jjt8               1/1     Running   0               6m47s   10.244.0.10     controlplane   <none>           <none>
kube-system    coredns-674b8bbfcf-c5jfx               1/1     Running   0               6m47s   10.244.0.11     controlplane   <none>           <none>
kube-system    etcd-controlplane                      1/1     Running   1 (12m ago)     6m54s   192.168.68.57   controlplane   <none>           <none>
kube-system    kube-apiserver-controlplane            1/1     Running   1 (7m19s ago)   6m54s   192.168.68.57   controlplane   <none>           <none>
kube-system    kube-controller-manager-controlplane   1/1     Running   4 (12m ago)     6m54s   192.168.68.57   controlplane   <none>           <none>
kube-system    kube-proxy-42gvn                       1/1     Running   0               6m47s   192.168.68.57   controlplane   <none>           <none>
kube-system    kube-scheduler-controlplane            1/1     Running   4 (12m ago)     6m54s   192.168.68.57   controlplane   <none>           <none>
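
On the kube-proxy and kube-flannel IPs: that part is expected. Both run as DaemonSets with hostNetwork: true, so they report the node's IP rather than a pod-network IP (note that CoreDNS, which is on the pod network, shows 10.244.x.x addresses above). A quick check, using the pod name from the listing:

# Should print "true" - the pod shares the node's network namespace
kubectl get pod -n kube-system kube-proxy-42gvn -o jsonpath='{.spec.hostNetwork}'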

What kind of service is your app using? It would need to be a NodePort service, and you’d need to use the IP of one of the virtual systems. This assumes that you’re using the BRIDGE mode of the Vagrant script, so that the network of the virtuals is accessible to your host system.
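
For reference, a minimal sketch of exposing an app as a NodePort service (the deployment name hello-api is hypothetical - substitute your own):

# Create a NodePort service; Kubernetes allocates a port in the 30000-32767 range
kubectl expose deployment hello-api --type=NodePort --port=8080

# Show the allocated node port (the number after the colon in PORT(S))
kubectl get svc hello-api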

Thank you @rob_kodekloud
The app is a simple REST API (a test API like “hello world”), and the service is configured to use NodePort.

  • Cluster : controlplane , node01 and node02
  • Able to access the API using the node01 IP - the REST API pod is running only on this node
  • Unable to access the API using the controlplane IP or node02 IP

I tried to find solutions; AI/Google says the issue is in the basic K8s setup - that kube-proxy and kube-flannel should not use the host IP. Is this correct? If yes, what is the fix?

You may not be getting the best info from the AI here :slight_smile: But assuming that the Vagrantfile is configured to use the BRIDGE mode (as you see here),

# Set the build mode
# "BRIDGE" - Places VMs on your local network so cluster can be accessed from browser.
#            You must have enough spare IPs on your network for the cluster nodes.
# "NAT"    - Places VMs in a private virtual network. Cluster cannot be accessed
#            without setting up a port forwarding rule for every NodePort exposed.
#            Use this mode if for some reason BRIDGE doesn't work for you.
BUILD_MODE = "BRIDGE"

then you should be able to access the app using the IP of one of the virtuals at the port assigned to the NodePort service; you can see this by doing kubectl get svc SERVICENAME -o wide.
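
A minimal sketch of that check, assuming the service is named hello-api (hypothetical) and using the controlplane IP from your listing:

# Look up the allocated node port
kubectl get svc hello-api -o jsonpath='{.spec.ports[0].nodePort}'

# In BRIDGE mode the app should answer on that port via ANY node's IP,
# not only the node where the pod runs - kube-proxy forwards the traffic
curl http://192.168.68.57:<nodeport>

If the app responds on the pod's node but not the others, that often points at the pod network (CNI) rather than the service configuration.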