enp0s3 and enp0s8 addresses when setting up the VMs with Vagrant

Hi everyone!
I have set up 3 VMs with the Vagrantfile from the course repo: certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox/Vagrantfile at master · kodekloudhub/certified-kubernetes-administrator-course · GitHub
I tried setting it up in both BRIDGE and NAT mode.
In both cases all three VMs have the same enp0s3 address: 10.0.2.15/24.
Question: is that fine? Shouldn’t they differ?
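
For reference, I collected the per-VM output below from the host with something like this (the VM names match those defined in the Vagrantfile):

vagrant ssh controlplane -c "ip a"
vagrant ssh node01 -c "ip a"
vagrant ssh node02 -c "ip a"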

Here is how ip a looks in BRIDGE mode:

vagrant@controlplane:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:41:72:fd:61:6c brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86163sec preferred_lft 86163sec
    inet6 fe80::41:72ff:fefd:616c/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:03:4b:c9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.11/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe03:4bc9/64 scope link
       valid_lft forever preferred_lft forever
vagrant@node01:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:41:72:fd:61:6c brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86249sec preferred_lft 86249sec
    inet6 fe80::41:72ff:fefd:616c/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:e6:20:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.21/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fee6:205a/64 scope link
       valid_lft forever preferred_lft forever
vagrant@node02:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:41:72:fd:61:6c brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86341sec preferred_lft 86341sec
    inet6 fe80::41:72ff:fefd:616c/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:73:5f:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.22/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe73:5fe0/64 scope link
       valid_lft forever preferred_lft forever

You repeated the controlplane output (the systems are controlplane, node01 and node02), but I don’t see a problem. enp0s3 is the NAT’ed interface and carries non-cluster outgoing traffic (package downloads, image pulls and so on). Each VM sits behind its own VirtualBox NAT, so it is expected, and harmless, that they all get the same 10.0.2.15 address.
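
You can confirm this from the routing table: with VirtualBox NAT the default gateway is 10.0.2.2 on enp0s3, so ip route should show something roughly like this (taking the controlplane addresses from your output):

vagrant@controlplane:~$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.11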

The enp0s8 interfaces look to be “owned” by our custom networking script; those are the IP addresses used when you set the cluster type to “NAT” (vs. “BRIDGE”). They look fine for that, and the cluster will work well, although it will not be bridged onto your LAN. The cluster traffic will go out on the *s8 interfaces.
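
One thing to keep in mind when you get to kubeadm: since every node reports the same 10.0.2.15 on enp0s3, the cluster components must be told to use the enp0s8 addresses. The course scripts may already handle this for you, but as a rough sketch (addresses taken from your output; /etc/default/kubelet is the Debian/Ubuntu location):

# On controlplane: advertise the host-only address, not the shared NAT one
sudo kubeadm init --apiserver-advertise-address=192.168.56.11

# On node01 (similarly node02): have the kubelet register its own enp0s8
# address, e.g. in /etc/default/kubelet, before running kubeadm join
KUBELET_EXTRA_ARGS=--node-ip=192.168.56.21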

Hi Rob!

I’ve corrected the output for node01 now.
If you say the configuration is fine, I’ll proceed with the cluster installation.

Thanks for your prompt reply!