Getting only IPv6 for Kubernetes cluster with vagrant

I am using VirtualBox 6.1 and Vagrant 2.4.3 on Ubuntu 20.04, following the CKA guide for installing a Kubernetes cluster with kubeadm (lecture 245 on Udemy).

I get only IPv6 for all three nodes. I am not sure how to use this when I reach the “kubeadm init” step.

Is there a way to get IPv4 instead?

Hi @a_yanni2006

It seems you’ve enabled IPv6 in the Vagrantfile.

Can you share your Vagrantfile?

Sure, it is the same as this one: certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox/Vagrantfile at master · kodekloudhub/certified-kubernetes-administrator-course · GitHub

I made no changes to it.

This Vagrantfile gives you IPv4 addresses, starting from 192.168.56.20 for the worker nodes and 192.168.56.11 for the controlplane/master.

Your kubeadm init command should specify the pod network CIDR range, for example:

kubeadm init --pod-network-cidr=10.1.1.0/24 --apiserver-advertise-address 192.168.56.11
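Before running kubeadm init, it can help to confirm the node really has that IPv4 address. A quick sketch (assuming enp0s8 is the VirtualBox host-only NIC, as in the ip output later in this thread):

```shell
# List every IPv4 address currently assigned on this node.
ip -4 -o addr show
# On a healthy node you should see an "inet 192.168.56.x" entry on enp0s8;
# that is the address to pass to --apiserver-advertise-address.
```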

PS: You can follow this guide to work with this Vagrantfile.

Yes, I understand it should give me IPv4 addresses, but all I get is something like this:
enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:84:e7:1d brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:fe84:e71d/64 scope link
valid_lft forever preferred_lft forever

IPv6 might be enabled for this interface.
You can check with:
sysctl net.ipv6.conf.enp0s8.disable_ipv6
If it reports 0, IPv6 is still enabled. Try disabling it temporarily (until the next reboot) with:
sudo sysctl -w net.ipv6.conf.enp0s8.disable_ipv6=1

To make the change survive a reboot, add the setting to /etc/sysctl.conf and reload it with sudo sysctl -p. Then check the interface again and see if it helps.
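Putting those steps together as one sketch (the interface name enp0s8 and the use of /etc/sysctl.conf for persistence are assumptions about this particular guest):

```shell
# 1. Check the current state (0 means IPv6 is still enabled on the NIC).
sysctl net.ipv6.conf.enp0s8.disable_ipv6

# 2. Disable IPv6 on the interface until the next reboot.
sudo sysctl -w net.ipv6.conf.enp0s8.disable_ipv6=1

# 3. To persist across reboots, record the setting and reload it.
echo 'net.ipv6.conf.enp0s8.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# 4. Verify the interface again.
ip addr show enp0s8
```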