What namespace do you have the secret in?
In the default namespace, alongside the ingress. I even tried deploying it in the cert-manager namespace and got the same errors.
I would bounce godaddy-webhook after changing the secret. Also, could you list:
kubectl get svc -n cert-manager
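By "bounce" I mean something like this (just a sketch; I'm assuming the chart created a Deployment called godaddy-webhook in the cert-manager namespace, so adjust the name to whatever Helm actually created):

```
# Restart the webhook so it re-reads the GoDaddy API-key secret
kubectl rollout restart deployment godaddy-webhook -n cert-manager
kubectl rollout status deployment godaddy-webhook -n cert-manager
```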
I deleted everything and started again, using a ClusterIssuer and the godaddy-api-secret as you suggested. Now I get these errors.
cert-manager:
E0110 14:50:24.958466 1 controller.go:167] "cert-manager/challenges: re-queuing item due to error processing" err="[the server is currently unable to handle the request (post godaddy.acme.mycompany.com), when updating the status: Internal error occurred: failed calling webhook \"webhook.cert-manager.io\": failed to call webhook: Post \"https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s\": dial tcp 10.105.170.37:443: connect: connection refused]" key="default/adeiz-ca-1-3158728248-3752762942"
I0110 15:10:57.738928 1 dns.go:88] "cert-manager/challenges/Present: presenting DNS01 challenge for domain" resource_name="adeiz-ca-1-3158728248-3752762942" resource_namespace="default" resource_kind="Challenge" resource_version="v1" dnsName="testps.adeiz.com" type="DNS-01" resource_name="adeiz-ca-1-3158728248-3752762942" resource_namespace="default" resource_kind="Challenge" resource_version="v1" domain="testps.adeiz.com"
E0110 14:52:16.782335 1 controller.go:167] "cert-manager/challenges: re-queuing item due to error processing" err="Unable to check the TXT record: ### Unexpected HTTP status: 403" key="default/adeiz-ca-1-3158728248-3752762942"
godaddy-webhook:
INFO[0117] ### URL request issued to check if the TXT DNS record is present: /v1/domains/adeiz.com/records/TXT/_acme-challenge.testps
403 is Forbidden, so either the credential is in the wrong place or it is wrong. I had the same issue: I copied the secret to the cert-manager namespace (roughly as in the sketch below), though it may have been that I also needed to restart the webhook pod.
Run this and paste the output; let's make sure everything is running:
kubectl get svc -n cert-manager
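For the copy itself, this is roughly what I did (a sketch; it assumes the Secret is named godaddy-api-key and lives in default, so use whatever name your ClusterIssuer actually references):

```
# Read the token out of the existing secret and recreate it in cert-manager
TOKEN=$(kubectl get secret godaddy-api-key -n default -o jsonpath='{.data.token}' | base64 -d)
kubectl create secret generic godaddy-api-key -n cert-manager --from-literal=token="$TOKEN"
```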
I moved the secret to the cert-manager namespace and still have the same issue. Here is the svc result:
NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
acme-webhook-godaddy-webhook   ClusterIP   10.96.28.200     <none>        443/TCP    76m
cert-manager                   ClusterIP   10.100.228.200   <none>        9402/TCP   98m
cert-manager-webhook           ClusterIP   10.105.170.37    <none>        443/TCP    98m
Did you restart the pod? Sorry, I should have asked for all, not just svc.
kubectl get all -n cert-manager
My service name for the webhook is different:
NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/cert-manager           ClusterIP   10.152.183.238   <none>        9402/TCP   19h
service/cert-manager-webhook   ClusterIP   10.152.183.53    <none>        443/TCP    19h
service/godaddy-webhook        ClusterIP   10.152.183.17    <none>        443/TCP    19h
Yes, I did restart them, and I just repeated the steps: moved the secret to cert-manager and restarted the pods, as you can see from the 4m age:
NAME                                               READY   STATUS    RESTARTS   AGE
pod/acme-webhook-godaddy-webhook-c59cb5569-cl4kq   1/1     Running   0          4m30s
pod/cert-manager-5d999567d7-hdrdr                  1/1     Running   0          4m30s
pod/cert-manager-cainjector-5d755dcf56-qkcxq       1/1     Running   0          4m30s
pod/cert-manager-webhook-7f7b47c4d4-7mw58          1/1     Running   0          4m30s

NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/acme-webhook-godaddy-webhook   ClusterIP   10.96.28.200     <none>        443/TCP    81m
service/cert-manager                   ClusterIP   10.100.228.200   <none>        9402/TCP   104m
service/cert-manager-webhook           ClusterIP   10.105.170.37    <none>        443/TCP    104m

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/acme-webhook-godaddy-webhook   1/1     1            1           81m
deployment.apps/cert-manager                   1/1     1            1           104m
deployment.apps/cert-manager-cainjector        1/1     1            1           104m
deployment.apps/cert-manager-webhook           1/1     1            1           104m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/acme-webhook-godaddy-webhook-c59cb5569   1         1         1       81m
replicaset.apps/cert-manager-5d999567d7                  1         1         1       104m
replicaset.apps/cert-manager-cainjector-5d755dcf56       1         1         1       104m
replicaset.apps/cert-manager-webhook-7f7b47c4d4          1         1         1       104m
Yeah, that all looks good. I am convinced your auth token is not being picked up.
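One way to see whether the token is being read at all is to watch the webhook's own logs while cert-manager retries the challenge, and to check the challenge status (the deployment and namespace names here are taken from your output above):

```
# Watch what the GoDaddy webhook logs when cert-manager presents the challenge
kubectl logs deploy/acme-webhook-godaddy-webhook -n cert-manager --tail=50 -f

# See what cert-manager reports on the challenge itself
kubectl describe challenge -n default
```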
Yeah, they are different because I used this Helm chart to install it. I don't think that would cause an issue, though:
helm repo add godaddy-webhook https://snowdrop.github.io/godaddy-webhook
helm install acme-webhook godaddy-webhook/godaddy-webhook -n cert-manager
The secret was created from https://developer.godaddy.com/keys, using the production option.
Secret created like this:
apiVersion: v1
kind: Secret
metadata:
  name: godaddy-api-key
  #namespace: default
type: Opaque
stringData:
  token: ABCDEFGH_IJKLM:NOPQRST
  # ABCDEFGH_IJKLM --> api_key
  # NOPQRST --> api_secret
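Applied and sanity-checked it roughly like this (a sketch; the filename is just whatever I saved the manifest as):

```
kubectl apply -f godaddy-api-key.yaml
# Confirm the stored token decodes back to the expected key:secret pair
kubectl get secret godaddy-api-key -o jsonpath='{.data.token}' | base64 -d
```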
I am not 100% sure the key goes in the default namespace… Try it in cert-manager and kube-system.
I'm embarrassed to say I have it in all three.
Delete your cert request. Create a new cert request. Bounce the pods and it should work.
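Something along these lines (a sketch; the Certificate name and manifest file are hypothetical, substitute your own):

```
# Hypothetical names -- use your actual Certificate name and manifest
kubectl delete certificate adeiz-ca -n default
kubectl apply -f certificate.yaml
# Bounce everything in cert-manager so the new secret gets picked up
kubectl rollout restart deployment -n cert-manager
```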
Haha, sadly it didn't work, even though I can hit the API with:
```
curl -X GET -H "Authorization: sso-key $key:$secret" "https://api.godaddy.com/v1/domains/available?domain=adeiz.com"
```
Result: `{"available":false,"definitive":true,"domain":"adeiz.com"}`
Yeah, so your API key is working. It must be that the key is not being picked up when it tries to validate the domain. You need to kill off the pods and let the deployments recreate them.
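Also, since the 403 comes from the TXT-record check specifically, it may be worth calling that same endpoint directly with your key (the path is the one from the webhook log above); if the key doesn't have access to that zone, this call should return a 403 as well:

```
curl -H "Authorization: sso-key $key:$secret" \
  "https://api.godaddy.com/v1/domains/adeiz.com/records/TXT/_acme-challenge.testps"
```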
Hey, I did that multiple times already; actually, I do it after every update. Same issue, though, and I'm not sure where the bug is.
If you are getting 403 errors, then the key is not being used or the key is wrong. I don't think it is a bug.
Yes, I mean I don't know what's wrong. I can use the api_key:secret with curl, but I still get a 403 error.
Hello, hope you're doing well. You were right, the problem was related to the api_key:secret; I didn't have permissions for that domain. Thanks a lot.
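For anyone who hits the same thing: a quick sanity check is to list the domains the key pair can actually see; if your domain is missing from the response, the key doesn't have access to it:

```
curl -H "Authorization: sso-key $key:$secret" "https://api.godaddy.com/v1/domains"
```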