Satyam Sareen:
Hi Everyone,
I have set up a cluster with kubeadm using 1 master and 2 worker nodes (3 Ubuntu EC2 servers in AWS in total).
Nginx pods on the same node are able to communicate with each other, i.e. curl works.
But when I try to curl pods on the other nodes (another EC2 machine), the curl command fails:
curl: (7) Failed to connect to 10.32.0.2 port 80: Connection refused
curl: (7) Failed to connect to 10.36.0.0 port 80: No route to host
Can someone please help here?
Thank you
Phani M:
Node-to-node communication is handled by kube-proxy (for Service traffic) and your network plugin (for pod traffic). Make sure both are running on every node.
The second thing to check is whether any iptables rules are blocking ingress/egress traffic on the nodes.
Are the nodes on the same VPC?
Also, in the Security Group of each VM, make sure all traffic is allowed for your internal LAN network; the CIDR is usually something like 172.31.0.0/16.
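If you would rather not open all internal traffic, the narrower fix is to open the ports Weave Net itself uses between the nodes: TCP 6783 plus UDP 6783-6784. A sketch with the AWS CLI, assuming the three nodes share one security group; SG_ID is a placeholder you would substitute with your own group id:

```shell
# Placeholder: the security group shared by the three EC2 nodes.
SG_ID="sg-xxxxxxxx"

# Allow Weave Net control traffic (TCP 6783) between members of the group.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 6783 --source-group "$SG_ID"

# Allow Weave Net data traffic (UDP 6783-6784) between members of the group.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol udp --port 6783-6784 --source-group "$SG_ID"
```

Using the group itself as `--source-group` keeps these ports open only between the cluster nodes, not to the internet.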
Nitin:
Hello @Phani M, thanks for your inputs… but don't you think the CoreDNS service is also responsible for resolving the DNS/IPs of the other nodes?
Basavraj Nilkanthe:
If either of them fails… you can't access it.
Satyam Sareen:
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
coredns-558bd4d5db-ppnq8         1/1     Running   1          26h   10.32.0.3       master     <none>           <none>
coredns-558bd4d5db-pwkt5         1/1     Running   1          26h   10.32.0.2       master     <none>           <none>
etcd-master                      1/1     Running   1          26h   172.31.30.108   master     <none>           <none>
kube-apiserver-master            1/1     Running   1          26h   172.31.30.108   master     <none>           <none>
kube-controller-manager-master   1/1     Running   1          26h   172.31.30.108   master     <none>           <none>
kube-proxy-95cvq                 1/1     Running   1          26h   172.31.30.108   master     <none>           <none>
kube-proxy-9c4sw                 1/1     Running   2          25h   172.31.30.32    worker02   <none>           <none>
kube-proxy-s88kv                 1/1     Running   1          25h   172.31.26.86    worker01   <none>           <none>
kube-scheduler-master            1/1     Running   1          26h   172.31.30.108   master     <none>           <none>
weave-net-2599n                  2/2     Running   3          26h   172.31.30.108   master     <none>           <none>
weave-net-6ggsn                  2/2     Running   5          25h   172.31.30.32    worker02   <none>           <none>
weave-net-tqv9z                  2/2     Running   3          25h   172.31.26.86    worker01   <none>           <none>
kube-proxy and the CoreDNS service seem to work. All 3 machines use the same security group and I have whitelisted port 80 there, but curl gives the error below:
curl: (7) Failed to connect to 10.36.0.0 port 80: No route to host
Also, when I ran kubeadm init, I set the pod CIDR to 10.244.0.0/16,
but the pods are being assigned addresses as below, which again confuses me.
NAME     READY   STATUS    RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
bb       1/1     Running   2          23h   10.36.0.1   worker02   <none>           <none>
ng3      1/1     Running   1          23h   10.32.0.3   worker01   <none>           <none>
ng4      1/1     Running   1          23h   10.32.0.4   worker01   <none>           <none>
ng5      1/1     Running   1          23h   10.36.0.2   worker02   <none>           <none>
nginx    1/1     Running   1          23h   10.36.0.0   worker02   <none>           <none>
nginx2   1/1     Running   1          24h   10.32.0.2   worker01   <none>           <none>
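One likely explanation for the surprising addresses, offered as a sketch rather than a diagnosis: Weave Net allocates pod IPs from its own default range, 10.32.0.0/12, and ignores the pod CIDR given to kubeadm init unless its IPALLOC_RANGE setting is changed to match. Every pod IP in the listing above (10.32.x.x, 10.36.x.x) falls inside that /12, while 10.244.0.0/16 does not. A small self-contained check:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_cidr IP NETWORK PREFIXLEN -> exit 0 if IP falls inside NETWORK/PREFIXLEN.
in_cidr() {
  local mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# Weave Net's default allocation range is 10.32.0.0/12.
for ip in 10.32.0.2 10.36.0.0 10.36.0.1 10.244.0.1; do
  if in_cidr "$ip" 10.32.0.0 12; then
    echo "$ip is inside 10.32.0.0/12"
  else
    echo "$ip is OUTSIDE 10.32.0.0/12"
  fi
done
```

Note that 10.36.0.0 is a perfectly valid host address inside a /12 (it is not a network address there), so the "No route to host" error points more toward blocked inter-node Weave traffic than toward a bad pod IP.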
cc @Mumshad Mannambeth @Tej_Singh_Rana, can you please help here? Thank you.