Sandy Mauriz:
Hi, in the ‘Troubleshooting’ section, the ‘Practice Test - Troubleshoot Network’ test has an issue on question 2:
After changing the kube-proxy daemonset from /var/lib/kube-proxy/config to /var/lib/kube-proxy/kubeconfig.conf,
I get a different error in the logs:
1 server.go:490] failed complete: failed to decode: no kind "Config" is registered for version "v1" in scheme "k8s.io/kubernetes/pkg/proxy/apis/config/scheme/scheme.go:29"
And when going to question 3 the issue still doesn’t get resolved so I cannot do it either.
Could you please help, thanks
Tej_Singh_Rana:
Hello, @Sandy Mauriz @Vic
Please check the kube-proxy configMap again. The correct file name is config.conf, not kubeconfig.conf.
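For reference, in a kubeadm-provisioned cluster the kube-proxy DaemonSet container passes that file via the --config flag. A rough sketch of the relevant part of the pod spec (the image tag and any extra flags will differ on your cluster, so treat this as illustrative only):

```yaml
containers:
- name: kube-proxy
  image: k8s.gcr.io/kube-proxy:v1.20.0   # tag varies by cluster version
  command:
  - /usr/local/bin/kube-proxy
  - --config=/var/lib/kube-proxy/config.conf
  - --hostname-override=$(NODE_NAME)
```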
Mohamed Ayman:
Check this solution:
Sandy Mauriz:
Hi @Mohamed Ayman @Tej_Singh_Rana, thanks for that, but even after modifying the ds, kube-proxy still fails, just with a different error. The new error says ‘no kind "Config"’.
noursh
May 14, 2021, 8:33pm
#5
I still face the same error as well
please advise
controlplane $ kubectl logs -n kube-system kube-proxy-g675h
F0514 20:21:36.897793 1 server.go:490] failed complete: failed to decode: no kind "Config" is registered for version "v1" in scheme "k8s.io/kubernetes/pkg/proxy/apis/config/scheme/scheme.go:29"
Please find below a similar issue, where the supplied config file was not the one kube-proxy expected:
opened 08:59AM - 08 Jun 18 UTC
closed 03:33PM - 21 Aug 18 UTC
## Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Deploy a highly available k8s cluster. Each time kubeadm init runs, the server value in the kube-proxy kubeconfig file becomes the apiserver address of the last kubeadm init.
## Versions
- **kubeadm version** (use `kubeadm version`):
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
- **Environment**:
- **Kubernetes version** (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
- **Cloud provider or hardware configuration**:
none
- **OS** (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
- **Kernel** (e.g. `uname -a`):
Linux node1 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- **Others**:
## What happened?
kubectl get node
NAME STATUS ROLES AGE VERSION
node1 Ready master 31m v1.10.2
node2 Ready master 30m v1.10.2
node3 Ready master 30m v1.10.2
node4 Ready node 30m v1.10.2
[root@node1 kubespray]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-apiserver-node1 1/1 Running 0 27m
kube-apiserver-node2 1/1 Running 0 27m
kube-apiserver-node3 1/1 Running 0 27m
kube-controller-manager-node1 1/1 Running 0 27m
kube-controller-manager-node2 1/1 Running 0 27m
kube-controller-manager-node3 1/1 Running 0 27m
kube-dns-7bd4d5fbb6-dwlkg 3/3 Running 0 25m
kube-dns-7bd4d5fbb6-wr57v 3/3 Running 0 25m
kube-flannel-5bs56 2/2 Running 0 26m
kube-flannel-62zgt 2/2 Running 0 26m
kube-flannel-l5kw6 2/2 Running 0 26m
kube-flannel-lgh22 2/2 Running 0 26m
kube-proxy-4h9th 1/1 Running 0 27m
kube-proxy-5bdfm 1/1 Running 0 28m
kube-proxy-gvwpr 1/1 Running 0 28m
kube-proxy-rv2r4 1/1 Running 0 28m
kube-scheduler-node1 1/1 Running 0 27m
kube-scheduler-node2 1/1 Running 0 27m
kube-scheduler-node3 1/1 Running 0 27m
kubedns-autoscaler-679b8b455-l7db8 1/1 Running 0 25m
kubernetes-dashboard-55fdfd74b4-pdrml 1/1 Running 0 25m
nginx-proxy-node4 1/1 Running 0 27m
[root@node1 kubespray]# kubectl exec -it kube-proxy-4h9th /bin/bash -n kube-system
root@node4:/# cat /var/lib/kube-proxy/kubeconfig.conf
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://10.10.0.6:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
root@node4:/# exit
exit
[root@node1 kubespray]# kubectl exec -it kube-proxy-rv2r4 /bin/bash -n kube-system
root@node2:/# cat /var/lib/kube-proxy/kubeconfig.conf
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://10.10.0.6:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
root@node2:/#
The kube-proxy kubeconfig server value on all nodes points to the same apiserver. Is there any problem with this configuration?
## What you expected to happen?
The kube-proxy kubeconfig server value should point to the apiserver address of its own node.
noursh
May 14, 2021, 9:09pm
#6
Hi all
I repeated the exercise and checked the parameters, and apparently the DaemonSet needs the parameter below, which is the one mentioned in the answer guide
/var/answers/answer2.md
--config=/var/lib/kube-proxy/config.conf
So I assume you made the same mistake I did and used the file name mentioned in the configMap instead:
/var/lib/kube-proxy/kubeconfig.conf
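For what it's worth, the ‘no kind "Config"’ error is consistent with that: --config expects a KubeProxyConfiguration document, while kubeconfig.conf is a client kubeconfig (kind: Config), a kind that kube-proxy's config decoder does not register. A quick local sketch of the difference (these are illustrative sample files under /tmp, not the real files from the pod):

```shell
# Recreate just the headers of the two files locally (illustrative copies)
cat > /tmp/config.conf <<'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
cat > /tmp/kubeconfig.conf <<'EOF'
apiVersion: v1
kind: Config
EOF

# --config only accepts the first kind; feeding it the second is what
# produces 'no kind "Config" is registered for version "v1"'
grep '^kind:' /tmp/config.conf /tmp/kubeconfig.conf
```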
Hi all,
I did exactly the same as @noursh: I found [ /var/lib/kube-proxy/kubeconfig.conf ] in the configmap and used it to fix the DaemonSet, which is not the correct thing to do.
The file name is defined at the beginning of the configMap data (between the annotations and the apiVersion) and looks like the snippet below. To see it, run [ kubectl -n kube-system describe configmaps kube-proxy ]:
CONFIGMAP DEFINITION
##################################################
Annotations: kubeadm.kubernetes.io/component-config.hash: sha256:c15650807a67e3988e859d6c4e9d56e3a39f279034149529187be619e5647ea0
Data
====
config.conf:
----
apiVersion: kubeproxy.config.k8s.io/v1alpha1
##################################################
So, go ahead and fix your DaemonSet to point to [ /var/lib/kube-proxy/config.conf ] and boom! Done!
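This also explains why the path has to match the configMap key: the kube-proxy configMap is mounted as a volume, so each data key (config.conf, kubeconfig.conf) becomes a file under /var/lib/kube-proxy. The relevant wiring in the DaemonSet looks roughly like this (a sketch from a typical kubeadm setup; names may vary):

```yaml
volumeMounts:
- mountPath: /var/lib/kube-proxy
  name: kube-proxy
volumes:
- name: kube-proxy
  configMap:
    name: kube-proxy   # keys config.conf and kubeconfig.conf become files
```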
Thank you @noursh, your post cleared up a two-hour investigation!