Karim:
Hi Team, I have installed a K8s cluster on AWS with 2 nodes currently. One problem I am having is that I am using Weave, which has a range of 10.32.0.0/12, yet the pods are getting an IP address of 172.17.0.2 as shown below. Any idea where I am going wrong?

weave status

        Version: git-34de0b10a69c (up to date; next check at 2022/10/08 09:19:57)

        Service: router
       Protocol: weave 1..2
           Name: 46:91:35:71:43:80(ip-10-10-10-153)
     Encryption: disabled
  PeerDiscovery: enabled
        Targets: 0
    Connections: 1 (1 established)
          Peers: 2 (with 2 established connections)
 TrustedSubnets: none

        Service: ipam
         Status: ready
          Range: 10.32.0.0/12
  DefaultSubnet: 10.32.0.0/12

root@ip-10-10-10-153:/etc/kubernetes/manifests# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          12m   172.17.0.2   ip-10-10-10-223   <none>           <none>

Aneek Bera:
one is a public IP, other is private IP

Karim:
Thanks @Aneek Bera, where is it coming from, and why can't I see my private IP when I describe the pod?

Karim:
In addition to that, I am not sure why some pods have the node address while others have a 172.17.0.x address, which seems to be coming from the docker interface.

ubuntu@ip-10-10-10-153:/etc/kubernetes/manifests$ kubectl get pods -A -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS      AGE   IP             NODE              NOMINATED NODE   READINESS GATES
default       nginx                                     1/1     Running   0             18m   172.17.0.2     ip-10-10-10-223   <none>           <none>
kube-system   coredns-565d847f94-8qmbp                  1/1     Running   0             24m   172.17.0.2     ip-10-10-10-153   <none>           <none>
kube-system   coredns-565d847f94-wprgn                  1/1     Running   0             24m   172.17.0.3     ip-10-10-10-153   <none>           <none>
kube-system   etcd-ip-10-10-10-153                      1/1     Running   0             24m   10.10.10.153   ip-10-10-10-153   <none>           <none>
kube-system   kube-apiserver-ip-10-10-10-153            1/1     Running   0             24m   10.10.10.153   ip-10-10-10-153   <none>           <none>
kube-system   kube-controller-manager-ip-10-10-10-153   1/1     Running   0             24m   10.10.10.153   ip-10-10-10-153   <none>           <none>
kube-system   kube-proxy-9cnhx                          1/1     Running   0             24m   10.10.10.153   ip-10-10-10-153   <none>           <none>
kube-system   kube-proxy-kxngm                          1/1     Running   0             12m   10.10.10.223   ip-10-10-10-223   <none>           <none>
kube-system   kube-scheduler-ip-10-10-10-153            1/1     Running   0             24m   10.10.10.153   ip-10-10-10-153   <none>           <none>
kube-system   weave-net-j27vc                           2/2     Running   1 (21m ago)   22m   10.10.10.153   ip-10-10-10-153   <none>           <none>
kube-system   weave-net-x8pgc                           2/2     Running   0             12m   10.10.10.223   ip-10-10-10-223   <none>           <none>
ubuntu@ip-10-10-10-153:/etc/kubernetes/manifests$ ifconfig
datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        inet6 fe80::1c3e:81ff:fee2:194c  prefixlen 64  scopeid 0x20<link>
        ether 1e:3e:81:e2:19:4c  txqueuelen 1000  (Ethernet)
        RX packets 31  bytes 1976 (1.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1146 (1.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:b7ff:fe13:4885  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b7:13:48:85  txqueuelen 0  (Ethernet)
        RX packets 4264  bytes 351688 (351.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4568  bytes 470872 (470.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 10.10.10.153  netmask 255.255.255.0  broadcast 10.10.10.255
        inet6 fe80::a0:45ff:fede:20c5  prefixlen 64  scopeid 0x20<link>
        ether 02:a0:45:de:20:c5  txqueuelen 1000  (Ethernet)
        RX packets 77427  bytes 92857824 (92.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20423  bytes 5383917 (5.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 417720  bytes 66457867 (66.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 417720  bytes 66457867 (66.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethc9b09d7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::44e:40ff:fea4:9182  prefixlen 64  scopeid 0x20<link>
        ether 06:4e:40:a4:91:82  txqueuelen 0  (Ethernet)
        RX packets 1677  bytes 160946 (160.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1823  bytes 181297 (181.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethcad1f81: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::7403:c1ff:fed9:4767  prefixlen 64  scopeid 0x20<link>
        ether 76:03:c1:d9:47:67  txqueuelen 0  (Ethernet)
        RX packets 1677  bytes 161108 (161.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1818  bytes 180545 (180.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethwe-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        inet6 fe80::cf2:dfff:fe33:3af7  prefixlen 64  scopeid 0x20<link>
        ether 0e:f2:df:33:3a:f7  txqueuelen 0  (Ethernet)
        RX packets 33  bytes 2626 (2.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1332 (1.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethwe-datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        inet6 fe80::a4a0:a6ff:fedc:e4da  prefixlen 64  scopeid 0x20<link>
        ether a6:a0:a6:dc:e4:da  txqueuelen 0  (Ethernet)
        RX packets 18  bytes 1332 (1.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33  bytes 2626 (2.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vxlan-6784: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65535
        inet6 fe80::4892:2fff:fea0:cbf5  prefixlen 64  scopeid 0x20<link>
        ether 4a:92:2f:a0:cb:f5  txqueuelen 1000  (Ethernet)
        RX packets 132  bytes 178992 (178.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 131  bytes 177616 (177.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        inet 10.32.0.1  netmask 255.240.0.0  broadcast 10.47.255.255
        inet6 fe80::4491:35ff:fe71:4380  prefixlen 64  scopeid 0x20<link>
        ether 46:91:35:71:43:80  txqueuelen 1000  (Ethernet)
        RX packets 32  bytes 2088 (2.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 886 (886.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Alistair Mackay:
Firstly, how have you installed the cluster? Is it “the hard way”? It looks like it.

It’s perfectly normal for some pods to use the host interface. You should find, if you inspect the pod YAML, that it has

hostNetwork: true

This is because certain pods, such as kube-proxy and CNI providers (Weave), need to interact with the external network. Note also that when building a non-kubeadm cluster, all admission webhooks need to be deployed with hostNetwork: true, or the API server (which is running on the host network) can’t see them.
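
A quick way to confirm that is kubectl’s jsonpath output (pod names taken from your listing above; it should print true for host-network pods):

kubectl get pod -n kube-system kube-proxy-9cnhx -o jsonpath='{.spec.hostNetwork}'
kubectl get pod -n kube-system weave-net-j27vc -o jsonpath='{.spec.hostNetwork}'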

As for provisioning the cluster-internal network (pods, services), it is advisable to use the CIDR ranges found in the Kubernetes the Hard Way documentation; they should play nicely with Weave.
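
Roughly, a minimal sketch of keeping the two in sync (the CIDR below is only an example; whatever you pass to kubeadm should match the range Weave is configured with):

# on the control plane node, pick a pod CIDR and keep it consistent with Weave
sudo kubeadm init --pod-network-cidr=10.32.0.0/12

# if you use a range other than Weave's default (10.32.0.0/12), set the
# IPALLOC_RANGE environment variable in the weave-net DaemonSet to the same
# value before applying the manifest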

Docker should not be installed if you are building a v1.24 cluster. Instead, use crictl to interact with containers directly, and containerd/runc/CNI plugins for container management by the kubelet.
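
For example, the crictl equivalents of the usual docker commands look like this (assuming containerd’s default socket path):

sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps      # running containers
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods    # pod sandboxes
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images  # pulled images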

Karim:
Thank you @Alistair Mackay, this is actually using kubeadm. The guide here mentions installing a container runtime, so I installed docker + cri-dockerd. Can you provide more insight into the admission webhooks, and does your statement remain valid with kubeadm? https://v1-24.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
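
For reference, the runtime each node is actually reporting shows up in the CONTAINER-RUNTIME column of:

kubectl get nodes -o wide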

Alistair Mackay:
@Karim Here’s my guide on setting up a kubeadm cluster on virtual machines with Weave. No reason it shouldn’t work on EC2 instances.

https://github.com/fireflycons/kubeadm-on-ubuntu-jammy

Karim:
Thank You!!

Alistair Mackay:
Ignore “boot time setup” if you’ve already got a cluster running in EC2.

Karim:
Will keep you posted. Thanks for all the help

unnivkn:
Hi @Karim, please go through this doc:
https://www.golinuxcloud.com/setup-kubernetes-cluster-on-aws-ec2/
Note: If you are installing k8s v1.24 or above, please do not use Docker as the container runtime engine.
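
For example, a minimal sketch of setting up containerd as the runtime instead (package names and paths are assumptions for Ubuntu; adjust for your distro):

sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# kubelet with the systemd cgroup driver expects SystemdCgroup = true
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd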