Hi everyone, I have the following scenario and I don’t know how to solve it.
I have an ArgoCD deployment on a legacy cluster; this instance is used by the development environment. On the other side I have a GKE cluster with a private control plane, which I can reach over VPN and authenticate to with the gcloud command by adding --dns-endpoint. So my .kube/config looks like this:
user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    args: null
    command: gke-gcloud-auth-plugin
    env: null
    installHint: Install the gke-gcloud-auth-plugin for use with kubectl from
      https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
    interactiveMode: IfAvailable
    provideClusterInfo: true
This is because we need the gke-gcloud-auth-plugin installed.
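For reference, I fetch the credentials with something like this (the cluster name, region, and project are placeholders):

gcloud container clusters get-credentials my-private-cluster \
    --region us-central1 \
    --project my-project \
    --dns-endpoint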
When I try to add the cluster with the argocd CLI, I get the following error:
INFO[0002] ServiceAccount "argocd-manager" already exists in namespace "kube-system".
INFO[0002] ClusterRole "argocd-manager-role" updated
INFO[0003] ClusterRoleBinding "argocd-manager-role-binding" updated
FATA[0008] rpc error: code = Unauthenticated desc = The server has requested credentials from the client.
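For context, the command I run is essentially this (the context name is a placeholder for my GKE kubeconfig context):

argocd cluster add gke_my-project_us-central1_my-private-cluster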
Why does this fail? I can run kubectl commands from my laptop and get responses from the GKE cluster just fine.
I have also modified the argocd-repo-server image to add a Google service account, but I see the same error.
Please, if anyone can give me a hand with this, I'd appreciate it.
Hi @dmoronta
Did you provide the --kubeconfig flag to the add cluster command?
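Something like this (the context name is a placeholder):

argocd cluster add my-gke-context --kubeconfig ~/.kube/config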
PS: Here’s more about it.
Hi @Santosh_KodeKloud
The --kubeconfig parameter is used when the context configuration is not in the default path.
By the way, I tried running the command with the --kubeconfig parameter anyway, and I got the same error.
When I try to add the cluster, do I have to be in the context where the ArgoCD instance is deployed?
Thanks for your help!
Hi Dmoronta,
I think I understand your issue. Here are a few tips that might help:
- When using the ArgoCD CLI to add a cluster (argocd cluster add), you don't need to have your ArgoCD cluster context active. You only need your kubeconfig context set to the target GKE cluster you want to add (see the sketch after this list).
- The CLI uses your local gcloud credentials from the kubeconfig to authenticate with the GKE cluster. Make sure the credentials (user or service account) have permission to interact with the Kubernetes API. You don't necessarily need a GCP IAM role like roles/container.viewer for this step, unless you're setting up authentication from inside the ArgoCD server later.
- The argocd-repo-server is not related to this issue; it's responsible only for syncing with Git repositories. The argocd-server and argocd-application-controller are the components that interact with your Kubernetes clusters.
- The error you're seeing (rpc error: code = Unauthenticated desc = The server has requested credentials from the client) is likely because the ArgoCD server (running in the legacy cluster) is trying to reach the GKE API but cannot authenticate. Since your GKE cluster uses private access and gke-gcloud-auth-plugin, the ArgoCD server doesn't have access to your local credentials and can't perform exec-based authentication.
- In your case, I recommend switching to a declarative cluster setup using Workload Identity Federation. This allows ArgoCD to authenticate directly to the GKE cluster using a Google service account, without relying on your local environment. It’s the most secure and scalable approach.
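As a quick sanity check for the first two points, run something like this from your laptop (the context name is a placeholder):

kubectl config use-context gke_my-project_us-central1_my-private-cluster
kubectl auth can-i '*' '*'    # confirm your gcloud identity can reach the Kubernetes API
argocd cluster add gke_my-project_us-central1_my-private-cluster

Note that this only proves your local credentials work; the ArgoCD components in the legacy cluster still need their own way to authenticate, which is why the declarative setup in the last point is the way to go.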
You can follow the official docs here: Declarative Setup - Argo CD (https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/)
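For reference, here's a minimal sketch of such a declarative cluster Secret, based on the GKE example in those docs. The Secret name, API endpoint, and CA data are placeholders, and it assumes the ArgoCD controller's Kubernetes service account is bound to a Google service account via Workload Identity:

# Cluster Secret that registers the GKE cluster with ArgoCD declaratively
apiVersion: v1
kind: Secret
metadata:
  name: gke-dev-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # tells ArgoCD this Secret defines a cluster
stringData:
  name: gke-dev-cluster
  server: https://<GKE_API_ENDPOINT>          # placeholder: your cluster's API endpoint
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["gcp"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<BASE64_ENCODED_CLUSTER_CA>"
      }
    }

With this in place, the ArgoCD controller fetches a Google token via argocd-k8s-auth from inside the cluster, so your local gcloud credentials are no longer involved.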