Not able to join node to Master

I am trying to set up Kubernetes on AWS. I took two instances, one for the master and one for the slave, and installed all the dependencies on both nodes. After kubeadm init I got the token and ran the join command on my slave instance, but every time it gives a connection refused error:

error execution phase preflight: couldn't validate the identity of the API Server: Get "https://172.31.1.246:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 172.31.1.246:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher

I allowed the port in the AWS security group but I am still getting this error. Please help me.
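The join command was the one printed by kubeadm init, of the usual form (token and hash redacted here):

    sudo kubeadm join 172.31.1.246:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>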

Hi @iftekharjoy1,

Thank you for your question, it’s an interesting one. Apparently the slave can’t access the API server on port 6443.

If the API server is up and running correctly on the control plane, you need to:

  • check the firewall; by default on Ubuntu it is not active
  • as you are on AWS, check the security group linked to your instance; by default AWS authorizes only the SSH port, so make sure the required ports are allowed in the security group rules

but you still need a proper firewall policy rather than opening everything. The Kubernetes ports-and-protocols documentation lists which ports you need to authorize.
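For example, the ports from that documentation can be opened with ufw like this (control-plane rules first, worker rules after):

    # Control-plane node
    sudo ufw allow 6443/tcp         # Kubernetes API server
    sudo ufw allow 2379:2380/tcp    # etcd server client API
    sudo ufw allow 10250/tcp        # kubelet API
    sudo ufw allow 10257/tcp        # kube-controller-manager
    sudo ufw allow 10259/tcp        # kube-scheduler

    # Worker node
    sudo ufw allow 10250/tcp          # kubelet API
    sudo ufw allow 30000:32767/tcp    # NodePort Services

Remember the same ports must also be allowed in the AWS security group, since ufw and security group rules are enforced independently.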

Regards

Thank you @mmkmou . I allowed only port 6443. Let me check the firewall status and get back to you again.
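To check it I will run:

    sudo ufw status verbose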


Thanks a lot @mmkmou . Now my slave can connect to the API server. But when I execute this command I am getting this error:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/tigera-operator.yaml

The connection to the server 172.31.85.253:6443 was refused - did you specify the right host or port?

I am trying to install Calico on my master node because my master isn’t Ready.

172.31.85.253 is for which instance?

The master node’s private IP.
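I confirmed it from inside the instance:

    curl http://169.254.169.254/latest/meta-data/local-ipv4    # EC2 metadata endpoint (may need an IMDSv2 token on newer setups)
    # or simply:
    hostname -I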

What rules are applied on your firewall?
Also, where did you run the kubectl command from on the master node?
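If kubectl inside the master itself gets connection refused, the kube-apiserver is most likely not running. Assuming you use containerd as the runtime (so crictl is available), you can check like this:

    # Is the kubelet service healthy?
    sudo systemctl status kubelet --no-pager

    # Are the control-plane static pods running? (-a also lists exited containers)
    sudo crictl ps -a | grep -E 'kube-apiserver|etcd'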

sudo ufw allow 6443/udp && sudo ufw allow 6443/tcp

I ran this command inside the master node. I have tried everything. Could it be due to low resources?
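For what it’s worth, kubeadm’s preflight checks expect at least 2 CPUs and about 1700 MB of RAM, and an undersized instance can make the control-plane pods crash-loop. The instance size can be checked with:

    nproc      # kubeadm expects at least 2 CPUs
    free -h    # and around 1700 MB of memory or more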

This is the log I am getting:

ubuntu@ip-172-31-85-253:~$ journalctl -xeu kubelet
Aug 27 18:09:12 ip-172-31-85-253 kubelet[4647]: E0827 18:09:12.229505 4647 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube>
Aug 27 18:09:12 ip-172-31-85-253 kubelet[4647]: I0827 18:09:12.692095 4647 scope.go:115] "RemoveContainer" containerID="118847da5d3ac1fd51528636b9349209ff7403fae0a9fb8d3>
Aug 27 18:09:12 ip-172-31-85-253 kubelet[4647]: E0827 18:09:12.692835 4647 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cali>
Aug 27 18:09:12 ip-172-31-85-253 kubelet[4647]: I0827 18:09:12.763180 4647 scope.go:115] "RemoveContainer" containerID="118847da5d3ac1fd51528636b9349209ff7403fae0a9fb8d3>
Aug 27 18:09:12 ip-17…

Hi, can you please give me the output of

kubectl get pods -n kube-system -o wide

Also, did you install any CRI (docker, containerd, CRI-O, …) on the worker node?
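Your kubelet log shows it repeatedly failing to start the kube-apiserver and Calico containers, which matches the connection refused. If containerd is your runtime, you can check it directly and pull the logs of the dying containers, for example:

    # Is the container runtime up?
    sudo systemctl status containerd --no-pager

    # List all containers, including exited ones
    sudo crictl ps -a | grep -E 'kube-apiserver|calico'

    # Why does this one keep dying? (ID prefix taken from your journalctl output)
    sudo crictl logs 118847da5d3a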

Hi sir, what is CRI, and is it related to the --v=5 problem?