Prometheus on same Kubernetes cluster

In your introduction video on Kubernetes with Prometheus, you mentioned that the Prometheus server needs to be deployed within the same Kubernetes cluster. However, if the entire cluster goes down, Prometheus will also go down, making monitoring impossible. In a real-world scenario, how is this typically configured to ensure monitoring remains available?

Hi priyanshd2510,

You’re absolutely right! In a real-world scenario, relying on a Prometheus server inside the same Kubernetes cluster means losing monitoring if the cluster goes down.

To address this, we only need to run the Node Exporter on the Kubernetes nodes—either as a containerized pod (typically a DaemonSet) or as a systemd service managed with systemctl on the host machine. Each Node Exporter exposes node metrics over HTTP, and a Prometheus server deployed outside the Kubernetes cluster scrapes them, so monitoring remains available even if the cluster fails.
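
Here is a minimal sketch of the DaemonSet approach; the namespace, labels, and image tag are illustrative, and a real deployment would also add resource limits and tolerations:

```yaml
# Sketch only: Node Exporter as a DaemonSet (names/namespace/image are placeholders).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true            # metrics are served on each node's own IP at :9100
      containers:
        - name: node-exporter
          image: prom/node-exporter:latest
          args:
            - --path.rootfs=/host  # read host filesystem metrics from the mounted root
          ports:
            - containerPort: 9100
          volumeMounts:
            - name: rootfs
              mountPath: /host
              readOnly: true
      volumes:
        - name: rootfs
          hostPath:
            path: /
```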

For better performance and security, the Node Exporters and the external Prometheus server should communicate over a private connection (for example a VPN or VPC peering rather than the public internet), which reduces latency and improves reliability.
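
On the external Prometheus side, the scrape configuration then points at the nodes' private addresses. A rough sketch, assuming the node IPs below are placeholders reachable over that private network:

```yaml
# Sketch of the external server's prometheus.yml; IPs, job name, and label are illustrative.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "k8s-nodes"
    static_configs:
      - targets:
          - "10.0.1.10:9100"   # private IPs of the Kubernetes nodes
          - "10.0.1.11:9100"
          - "10.0.1.12:9100"
        labels:
          cluster: "prod-cluster"   # example label to identify the cluster
```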

As for Grafana, it can be deployed anywhere, as long as it has access to Prometheus to fetch metrics.
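
For example, Grafana can be pointed at the external Prometheus through a datasource provisioning file; the URL below is a placeholder for wherever your Prometheus server is reachable:

```yaml
# Sketch of a Grafana datasource provisioning file
# (e.g. placed under /etc/grafana/provisioning/datasources/).
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.internal.example.com:9090   # placeholder address
    isDefault: true
```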

In most learning environments, demos, and small projects, we typically deploy Prometheus inside the Kubernetes cluster for simplicity. However, for production systems, running Prometheus externally or in a highly available setup ensures continuous monitoring even during cluster failures.
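
As a rough sketch of one such resilient pattern, an in-cluster Prometheus can also forward its samples to an external long-term store via remote_write, so the data survives a cluster outage; the endpoint URL below is a placeholder:

```yaml
# Sketch: remote_write section of prometheus.yml forwarding samples to an
# external store (e.g. Thanos Receive or Mimir); the URL is illustrative.
remote_write:
  - url: https://metrics-store.example.com/api/v1/push
    queue_config:
      max_samples_per_send: 5000
```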


In the video, they mentioned that we should deploy Prometheus as close to the targets as possible. Can you provide some info on what that means and how we can ensure it in a real-world scenario?
Thanks