Simple one-node, multi-pod install on a private AWS EC2 instance, exposing services to the public - thoughts please

Hello experts!

I have a Kubernetes cluster on a private EC2 instance.

The cluster is not provisioned by a managed cloud offering (e.g. EKS) but installed as if on bare metal, which means there are no external IPs assigned and LoadBalancer services are not supported.

How do we expose Kubernetes services on this private EC2 instance to the internet without an external IP? Pointing DNS (Route 53) directly at the private EC2 instance works, but is not allowed per our security rules.

I'm trying out a few things, like an ALB (not working) and a reverse proxy (just started trying this; see the sketch below).
[I'd hate to go down a path like MetalLB.]
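Roughly what I have in mind for the reverse-proxy route (a sketch only; the service name, ports, and labels are placeholders):

```yaml
# Sketch: expose the workload as a NodePort Service on the private node.
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # placeholder workload name
spec:
  type: NodePort
  selector:
    app: my-app                # placeholder pod label
  ports:
    - port: 80                 # Service port inside the cluster
      targetPort: 8080         # placeholder container port
      nodePort: 30080          # must be in the NodePort range (default 30000-32767)
```

A reverse proxy (nginx/HAProxy) on a public-facing host would then forward traffic to <private-node-ip>:30080, so only the proxy host has to be reachable from the internet.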

Any recommendations? Has anyone tackled something like the above while keeping it simple?

-Thanks
SA

When you say "Kubernetes services", do you mean a workload that's exposed via a Service?
Do you have an ingress controller (such as ingress-nginx) installed on your roll-it-yourself cluster? If the ingress controller is configured via a LoadBalancer Service (which requires some special annotations on AWS), then you expose the workloads via Ingress resources. Here's a (rather old) blog post from someone who did this; you'd need to check the AWS documentation to make sure the annotations are still valid.
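Very roughly, the shape is something like the sketch below: the ingress-nginx controller's Service of type LoadBalancer carries the AWS annotations, and the workload is then exposed through an Ingress. The annotation and host names here are illustrative only (verify against the current AWS docs), and note that on a cluster without the AWS cloud-provider integration or the AWS Load Balancer Controller, a LoadBalancer Service will just sit in Pending.

```yaml
# Sketch only: annotation names, labels, and hostnames are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"   # example AWS annotation
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
---
# The workload is exposed with an Ingress that routes a host/path to its Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                      # placeholder
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com         # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app        # your workload's Service
                port:
                  number: 80
```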

Yes, exposing the service. Thanks for the quick response; will check and update further.
Thx