yashwanth:
Hello team, we have a requirement that one of the pods needs to access a camera that sits on our on-prem network. Our GKE cluster is in a GCP network, and we have established a VPN connection between the two networks. To allow access, our IT team needs to whitelist the IP of the server that will access the camera, but in Kubernetes pod IPs are dynamic and ever-changing. What is the solution for this?
Alistair Mackay:
They are not going to be able to whitelist a single IP, due to the nature of Kubernetes as you have pointed out. I don’t really know GKE, but what I think you would need to do is create a dedicated node pool using small machine types and constrain it to known, dedicated subnets that you have created for the pool.
Only permit the service that needs camera access to launch in this node pool (via taints/tolerations and node affinity, roughly along the lines of the sketch below). Connect the VPN to the node pool’s subnets and whitelist those subnets.
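Something like this in the Deployment’s pod template would do it; I’m assuming a pool whose nodes are labelled pool=camera-access and tainted dedicated=camera-access:NoSchedule, and the image name is just a placeholder:

```yaml
# Sketch of a Deployment pinned to a dedicated node pool.
# Assumes the pool's nodes carry the label pool=camera-access and the
# taint dedicated=camera-access:NoSchedule (placeholder names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camera-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: camera-client
  template:
    metadata:
      labels:
        app: camera-client
    spec:
      nodeSelector:
        pool: camera-access          # only schedule onto the dedicated pool
      tolerations:
      - key: dedicated
        operator: Equal
        value: camera-access
        effect: NoSchedule           # tolerate the pool's taint; other workloads won't
      containers:
      - name: camera-client
        image: camera-client:latest  # placeholder image
```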
What you won’t be able to do is make the pod come up every time on the same node with a fixed IP. If the cluster is built properly for resilience, then the nodes change (cluster scaling, unhealthy nodes) as well as the pods.
yashwanth:
We can whitelist the new node pool’s subnet range, but the pod IP range will be completely different, right? Will it still work?
Alistair Mackay:
The pod will ultimately egress via its node’s network interface. Traffic leaving the pod network for destinations outside the cluster is normally SNAT’d (masqueraded) to the node’s IP by iptables rules on the node, so from outside the subnet that the nodes are on, traffic should appear to come from the node IPs.
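On GKE that masquerade behaviour is governed by the ip-masq-agent, so it is worth checking its config; a rough sketch of its ConfigMap is below (the CIDRs are placeholders). The key point is that any range listed under nonMasqueradeCIDRs is reached with the pod IP as the source, so your on-prem range should not be in that list if you want the camera side to see the node IPs:

```yaml
# Sketch of the ip-masq-agent ConfigMap on GKE (kube-system namespace).
# The CIDRs below are placeholders; adjust to your cluster's real ranges.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.56.0.0/14     # example pod range: pod-to-pod traffic keeps pod IPs
      - 10.0.0.0/20      # example node subnet
    # Anything NOT listed above (e.g. the on-prem camera network) is
    # masqueraded and leaves the cluster with the node's IP as source.
    masqLinkLocal: false
    resyncInterval: 60s
```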
Alistair Mackay:
You can also experiment with hostNetwork: true in the pod definition, meaning the pod attaches directly to the node’s network rather than the pod network. This is fine if the pod only makes egress connections.
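A minimal sketch of what that looks like (image name and port are placeholders):

```yaml
# Minimal sketch of a pod running on the host network.
# The container shares the node's network namespace, so outbound
# connections use the node's IP directly (no pod-network SNAT involved).
apiVersion: v1
kind: Pod
metadata:
  name: camera-client-hostnet
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working on the host network
  containers:
  - name: camera-client
    image: camera-client:latest       # placeholder image
    ports:
    - containerPort: 8080             # binds directly on the node, so avoid port clashes
```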
yashwanth:
If we deploy the pod on the host network, can we still use ClusterIP Services to communicate with other pods?