
sonali:
Even after changing everything (scheduler name and leader elect and ports) the pod is crashing repeatedly with below error:

I1227 05:52:30.919988       1 serving.go:331] Generated self-signed cert in-memory
failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use
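For context on the error above: 10259 is the default kube-scheduler's secure serving port, and with hostNetwork: true both schedulers share the node's network stack, so the second listener cannot bind. A minimal sketch of the same failure with plain sockets (nothing Kubernetes-specific; the port is picked by the OS):

```python
import socket

# Two sockets binding the same 127.0.0.1:port collide with EADDRINUSE,
# just like two schedulers on hostNetwork both claiming 10259.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))        # let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    in_use = False
except OSError:                     # "bind: address already in use"
    in_use = True
finally:
    second.close()
    first.close()

print(in_use)  # True
```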

unnivkn:
Hi @sonali, may I know which version of k8s you are working on?

sonali:
version 1.20, it's the lab environment from KodeKloud

unnivkn:
Hi @sonali please try this: https://kodekloud.slack.com/archives/CHMV3P9NV/p1639486700427600?thread_ts=1639394915.381400&cid=CHMV3P9NV

sonali:
Hi @unnivkn… it's still not working; as per the events, the startup probe failed

Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  32s               default-scheduler  Successfully assigned kube-system/my-scheduler to controlplane
  Normal   Pulled     31s               kubelet            Container image "k8s.gcr.io/kube-scheduler:v1.20.0" already present on machine
  Normal   Created    31s               kubelet            Created container my-scheduler
  Normal   Started    30s               kubelet            Started container my-scheduler
  Warning  Unhealthy  5s (x2 over 15s)  kubelet            Startup probe failed: Get "https://127.0.0.1:10269/healthz": http: server gave HTTP response to HTTPS client

sonali:
Logs:

root@controlplane:~# k logs my-scheduler -n kube-system
W1227 17:26:16.499046       1 authorization.go:47] Authorization is disabled
W1227 17:26:16.499127       1 authentication.go:40] Authentication is disabled
I1227 17:26:16.499140       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10269
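The "Serving healthz insecurely" line above is the clue for the probe failure: the endpoint is plain HTTP on 10269, while the probes use scheme: HTTPS. A small sketch of that mismatch (plain Python, no Kubernetes; the local server and port here are stand-ins for the scheduler's insecure endpoint):

```python
import http.server
import threading
import urllib.request

# The scheduler's /healthz is plain HTTP (insecure serving), but an HTTPS
# probe speaks TLS first: the handshake reads an HTTP response and aborts.
class Healthz(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo output quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Healthz)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen(f"https://127.0.0.1:{port}/healthz", timeout=5)
    failed = False
except Exception:
    failed = True  # ssl error: the server gave an HTTP response to a TLS client
finally:
    srv.shutdown()

print(failed)  # True
```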

sonali:
YAML file

root@controlplane:~# cat my-scheduler.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
#    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
#    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --scheduler-name=my-scheduler  
    - --port=10269
    - --secure-port=0  
    image: k8s.gcr.io/kube-scheduler:v1.20.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10269
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: my-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10269
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
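Note that with --secure-port=0 and --port=10269 this spec serves /healthz over plain HTTP (the scheduler log even says "Serving healthz insecurely"), yet both probes use scheme: HTTPS, which matches the kubelet's "server gave HTTP response to HTTPS client" error. A sketch of probes consistent with these flags (keeping the ports above) would be:

    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10269
        scheme: HTTP    # plain HTTP to match --port=10269 with --secure-port=0
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10269
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15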

sonali:
@Aneek Bera Did you face any issues with startup probe? Follow up question from this thread: https://kodekloud.slack.com/archives/CHMV3P9NV/p1639486700427600?thread_ts=1639394915.381400&cid=CHMV3P9NV

unnivkn:
Hi @sonali could you please try this yaml as-is: you can see the startup probe code has been deleted below. Please read all the commented lines.

unnivkn:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: my-scheduler            # modified name
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
#    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
#    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    # modified/added 5 fields below
    image: k8s.gcr.io/kube-scheduler:v1.20.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10285             # modified port
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: my-scheduler          # modified name
    resources:
      requests:
        cpu: 100m
    # deleted startup probe (10 lines) here
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

sonali:
Well, my question was around exactly that: so a custom scheduler cannot have a startup probe?

unnivkn:
I didn’t see it in the k8s docs… please refer to this: https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/
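A custom scheduler can have probes, as long as they match how it actually serves /healthz (scheme and port). The linked doc also shows how a pod opts into a custom scheduler via spec.schedulerName; a minimal example along those lines (the pod name and image here are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod                  # illustrative
    spec:
      schedulerName: my-scheduler     # must match the custom scheduler's name
      containers:
      - name: nginx
        image: nginx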

sonali:
Thanks @unnivkn