[root@master ~]# kubectl get nodes
NAME   STATUS   ROLES   AGE   VERSION
m . . .

Alistair Mackay:
You can’t make a bridge to the entire pod network
If you want to access the pod from your machine, the simplest way is to create a NodePort service. Here I'm assuming that nginx is listening on port 80.

kubectl expose pod nginx --name=nginx-svc --type=NodePort --port=80 --target-port=80 --dry-run=client -o yaml > svc.yaml
vi svc.yaml
# Edit the YAML file and add nodePort: 30080 alongside port and targetPort under spec.ports
kubectl create -f svc.yaml
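For reference, a minimal sketch of what the edited svc.yaml could look like (the selector run=nginx is an assumption based on how kubectl expose copies the pod's labels; check the generated file for the real values):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    run: nginx          # assumed pod label; use whatever kubectl expose generated
  ports:
  - port: 80            # service (ClusterIP) port
    targetPort: 80      # container port
    nodePort: 30080     # added by hand; must be within the NodePort range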

Now your service should be running, and you should be able to connect to it via http://worker2:30080

Alistair Mackay:
Note that node ports must be between 30000 and 32767 (the default range).
NodePort should not be used in production scenarios.
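That range comes from the kube-apiserver flag --service-node-port-range (default 30000-32767). As a rough sketch, on a kubeadm cluster you would see or change it in the apiserver static pod manifest (path assumed to be the kubeadm default):

# excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-32767   # adjust to widen or narrow the allowed range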

Thiru:
@Alistair Mackay I already did the above, and I can reach the pod from outside only by using worker2ip:nodeport, not by masterip:nodeport.

unnivkn:
Hi @Thiru, do you have any pod instance running on the master node? The master node is not meant for handling worker loads. FYR:

Thiru:

I don't have any pods on the master node, and it is not untainted. @unnivkn

If the application, for example nginx, is inside a container on workernode2, then to reach it from outside I have to use a NodePort service.

1. Does that mean I can access the nginx container from outside only by using workernode2:nodeport? Is that correct?

2. But I heard somewhere that I could access a particular application via any node IP together with the nodePort that belongs to that application.

Can you clear up my misunderstanding above, @unnivkn?

Thiru:
@unnivkn kindly go through the above and clarify it for me…

Alistair Mackay:
Once created successfully, the NodePort service should be accessible on all worker nodes in the cluster.
It might not be exposed on the master/control-plane nodes; that would be for cluster security.

You should only use NodePort for testing. For production use, it should be Ingress:
https://kubernetes.io/docs/concepts/services-networking/ingress/
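As a rough sketch of what that looks like (assuming an ingress controller such as ingress-nginx is already installed; the host name is illustrative and nginx-svc is the service from earlier):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com       # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc       # route to the service created above
            port:
              number: 80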

unnivkn:
https://stackoverflow.com/questions/66471581/does-kubernetes-service-of-type-nodeport-works-with-worker-ip-or-control-planem

unnivkn:
Hi @Thiru, your question is well explained here. Please have a look.

Thiru:
I can only access the container by using the IP of the worker node it belongs to, not by the control-plane node IP or any other node IP, even though I exposed it through a NodePort…

Thiru:
Kindly go through the document I attached and help me get clear on this topic… @unnivkn @Alistair Mackay

Thiru:
I think that regardless of the node IP, the nodePort should forward the traffic to the clusterIP and expose it to the outside…

Thiru:
I mean: if the service belongs to a particular pod, we can reach the pod with clusterIP:port locally (inside the cluster). Both the nodePort and the clusterIP belong to the Service object, so if I hit the nodePort on any one of the cluster node IPs, the nodePort should forward to the clusterIP, and the clusterIP will route to the container in the pod. So the node IP shouldn't matter; the nodePort plays the major role. Is my understanding correct, @unnivkn? If it is, why am I facing the issue above??

Thiru:
kindly share your thoughts @unnivkn

unnivkn:
Did you try this from a browser?
[screenshot: image.png]

unnivkn:
Delete this pod and SVC, then try to schedule the pod on worker1 and the master node one by one and see how it responds.
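As a rough sketch (node names assumed from this thread; swap nodeName between worker1, worker2 and the master for each test):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx            # keep the label the service selector expects
spec:
  nodeName: worker1       # pin the pod to a specific node for this test
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80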

Thiru:
I tried in a browser also, but the same result came. I placed another pod on worker1, and likewise I can only access that pod from outside via worker1ip:nodeport…

unnivkn:
Hmmm… it looks like something is wrong with your setup or firewall settings, etc…
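A couple of checks that might help narrow it down (a rough sketch; this assumes a kubeadm-style cluster where kube-proxy runs in kube-system and the nodePort is 30080 as above):

kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide   # is kube-proxy running on every node? (label assumed from a kubeadm setup)
kubectl get svc nginx-svc -o wide                               # confirm the nodePort assignment
kubectl get endpoints nginx-svc                                 # confirm the service actually has the pod as an endpoint
sudo iptables-save | grep 30080                                 # on the failing node: are the NodePort rules present?

If the rules exist but connections still fail, a host firewall or a cloud security group blocking that port on the node is a likely suspect.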

Thiru:
Thanks @unnivkn… The issue is not solved, but I have cleared up all my doubts related to services: for contact from outside to inside, internally (inside the cluster) the SVC will use only the clusterIP to reach the container inside the pod…