How did access to a ClusterIP work from outside the cluster (from the control node's IP)?

root@controlplane ~ ➜  ip a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
    inet 192.168.121.6/24 metric 100 brd 192.168.121.255 scope global dynamic eth0
    inet6 fe80::8e7:bcff:fed6:349c/64 scope link 
    inet 10.50.0.1/16 brd 10.50.255.255 scope global weave
    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
    inet6 fe80::e0e7:40ff:fece:3054/64 scope link 
    inet6 fe80::d0f1:79ff:feab:2837/64 scope link 
    inet6 fe80::20c7:93ff:fed0:ff5f/64 scope link 
    inet6 fe80::ec7d:a5ff:fec7:b564/64 scope link 
    inet6 fe80::10f0:89ff:fe1f:5929/64 scope link 
    inet6 fe80::3c87:a4ff:fe3c:f4c3/64 scope link 
    inet6 fe80::d8e2:ecff:fee7:32b6/64 scope link 
    inet6 fe80::c4ac:63ff:fe8c:fce7/64 scope link 

root@controlplane ~ ➜  kubectl -n triton get svc,po,ep -o wide
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/mysql         ClusterIP   10.109.38.86     <none>        3306/TCP         41m   name=mysql
service/web-service   NodePort    10.101.102.231   <none>        8080:30081/TCP   41m   name=webapp-mysql

NAME                                READY   STATUS    RESTARTS   AGE   IP          NODE           NOMINATED NODE   READINESS GATES
pod/mysql                           1/1     Running   0          41m   10.32.0.3   controlplane   <none>           <none>
pod/webapp-mysql-689d7dc744-k8fs6   1/1     Running   0          41m   10.32.0.2   controlplane   <none>           <none>

NAME                    ENDPOINTS        AGE
endpoints/mysql         10.32.0.3:3306   41m
endpoints/web-service   10.32.0.2:8080   41m

root@controlplane ~ ➜  nc -vz 10.101.102.231 8080
Connection to 10.101.102.231 8080 port [tcp/http-alt] succeeded!
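(For reference, `nc -vz` simply attempts a TCP handshake and reports the result. A rough Python equivalent is sketched below; it is demonstrated against a throwaway local listener, since the ClusterIP above only exists inside that lab environment:)

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connect-and-close, like `nc -vz host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener; on the control node you would call
# tcp_check("10.101.102.231", 8080) instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # let the OS pick a free port
listener.listen(1)
_, port = listener.getsockname()
print(tcp_check("127.0.0.1", port))  # True
listener.close()
```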

ClusterIP addresses are generally not accessible from outside a cluster; the pod-to-pod network set up by a CNI plugin does not usually allow access from anywhere except other pods. This is by design; it's more secure that way.

If you want to reach a pod from outside the cluster, you need a NodePort or LoadBalancer service, which is designed for that: the nodes sit on a separate network (even if it is usually also behind a firewall), and load balancers are meant to be reachable from entirely outside the cluster.

Here I think I was able to access it directly from the control node IP 192.168.121.6. Curious to know how that worked? :slight_smile:

Take a look at the networking on the node, which allows this, partly because of how the CNI is configured. The control node is itself part of the cluster: it has interfaces on the pod network (the `weave` addresses in the `ip a` output above), so pod IPs are directly routable from it, and kube-proxy programs NAT rules on every node that translate ClusterIP:port to a pod endpoint for traffic originating on the node as well as traffic passing through it. You should look at this as a security hole that would normally not be opened in this fashion. The pod network should not be exposed to the outside world. Period.
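A quick way to see the distinction, sketched with Python's `ipaddress` module. The CIDRs come from the `ip a` and `kubectl` output above; note the ClusterIP is not part of the pod network at all:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.32.0.0/12")       # weave's range on the node
node_weave_ip = ipaddress.ip_address("10.32.0.1")     # the node's own weave address
mysql_pod_ip = ipaddress.ip_address("10.32.0.3")      # pod/mysql
cluster_ip = ipaddress.ip_address("10.101.102.231")   # service/web-service

print(node_weave_ip in pod_cidr)   # True  -> the node sits on the pod network
print(mysql_pod_ip in pod_cidr)    # True  -> pod IPs are directly routable from it
print(cluster_ip in pod_cidr)      # False -> the ClusterIP is virtual; only
                                   #          kube-proxy's NAT rules make it answer
```

So the `nc` test succeeded for two stacked reasons: the node can route to the pods directly, and kube-proxy's rules on the node rewrite the virtual ClusterIP to a real pod endpoint.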