I was trying to access a Service by its name instead of its ClusterIP from the control plane (this is a single-node cluster), but it doesn't seem to work. Does being "in the cluster" mean I have to launch a new pod and run the request from there? Can't I reach the cluster's services by logging in to one of its master or worker nodes? I can see that kube-dns is up and running, so can someone help me understand why only the IP address works?
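If it does come down to launching a pod, I assume the check would look roughly like the following (the pod name test-dns and the busybox image are just placeholders I picked, I haven't confirmed this is the recommended way):

kubectl run test-dns --rm -it --image=busybox:1.36 --restart=Never -- nslookup svc-1.default.svc.cluster.local

Here is what I actually tried, directly on the control plane node: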
root@controlplane:~# kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
httpd   1/1     Running   0          31s   k=v,run=httpd
root@controlplane:~# kubectl expose pod httpd --name=svc-1 --port=80
service/svc-1 exposed
root@controlplane:~# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21m
svc-1        ClusterIP   10.97.241.64   <none>        80/TCP    12m
root@controlplane:~# curl 10.97.241.64:80
It works!
root@controlplane:~# curl svc-1.default.svc.cluster.local:80
curl: (6) Could not resolve host: svc-1.default.svc.cluster.local
root@controlplane:~# curl svc-1:80
curl: (6) Could not resolve host: svc-1
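One thing I was planning to check next, but haven't yet, is whether kube-dns itself can resolve the name when queried directly from the node (assuming nslookup is installed there; 10.96.0.10 is the kube-dns ClusterIP shown in the next output):

nslookup svc-1.default.svc.cluster.local 10.96.0.10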
root@controlplane:~# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   21m
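My current guess is that the node's own resolver configuration simply doesn't point at kube-dns, which would explain why the ClusterIP works but the name doesn't: pods get an /etc/resolv.conf from the kubelet with the cluster DNS server and the cluster.local search domains, while the host keeps its own upstream resolver. I was going to verify that on the node roughly like this (just my guess at the relevant check, not something I've confirmed yet):

cat /etc/resolv.conf
# I expect this to list an upstream nameserver rather than 10.96.0.10,
# and no cluster.local search domain.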