Hi, in the Multiple Schedulers lab, I have changed the --leader-elect and added schedul . . .

Deepak Mourya:
logs are not coming

Deepak Mourya:
root@controlplane:~# k logs my-scheduler -n kube-system
root@controlplane:~#

Basavraj Nilkanthe:
well, on my side it has moved ahead

Basavraj Nilkanthe:
but still liveness and readiness are failing

Basavraj Nilkanthe:
so it seems we have to disable the secure port and update the probe from https to http

Basavraj Nilkanthe:

Warning  Unhealthy  4s (x14 over 2m14s)  kubelet  Startup probe failed: Get "https://127.0.0.1:10282/healthz": http: server gave HTTP response to HTTPS client
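That failure class can be reproduced locally without a cluster. This is only a sketch (port 18443 is hypothetical, not the lab's): an HTTPS client talking to a plain-HTTP listener fails the handshake, which is exactly what the kubelet probe hit once the scheduler stopped serving TLS on that port.

```shell
# Start a plain-HTTP server, then probe it as an HTTPS client.
# The client speaks TLS, the server answers plain HTTP, so the request fails,
# mirroring the kubelet's "server gave HTTP response to HTTPS client" error.
python3 -m http.server 18443 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
if ! curl -sk https://127.0.0.1:18443/healthz >/dev/null 2>&1; then
  echo "https client failed against http server"
fi
kill $srv
```

The fix in the lab is the reverse of this mismatch: make the probe's scheme match what the server actually speaks.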

Deepak Mourya:
i need to restart now, session close

Basavraj Nilkanthe:
Deepak, this question itself is tricky and needs more clarity from @Mumshad Mannambeth… Now I have tested it and my-scheduler is working…

root@controlplane:~# cat /tmp/my-scheduler.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --scheduler-name=my-scheduler
    - --port=10282
    - --secure-port=0
    image: k8s.gcr.io/kube-scheduler:v1.20.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10282
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10282
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

Basavraj Nilkanthe:

root@controlplane:~# kubectl get pods -A        
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-fjmhs                1/1     Running   0          26m
kube-system   coredns-74ff55c5b-v2m9n                1/1     Running   0          26m
kube-system   etcd-controlplane                      1/1     Running   0          26m
kube-system   kube-apiserver-controlplane            1/1     Running   0          26m
kube-system   kube-controller-manager-controlplane   1/1     Running   0          26m
kube-system   kube-flannel-ds-pbhn9                  1/1     Running   0          26m
kube-system   kube-proxy-8nmtd                       1/1     Running   0          26m
kube-system   kube-scheduler-controlplane            1/1     Running   0          26m
kube-system   my-scheduler-controlplane              1/1     Running   0          2m27s
root@controlplane:~# 

Deepak Mourya:
What changes have you made ?

Basavraj Nilkanthe:
1- Just adding the new name --scheduler-name=my-scheduler won't work, because it conflicts with the default scheduler's port and fails while binding.
2- As per the logs we can choose a different port, so I updated the port in the command section (--port=10282) and in the probes, but that doesn't work either… it complains that the default scheduler is using the secure port… but the question is why the custom scheduler can't use a secure port – @Mumshad Mannambeth @Tej
3- So in order to use a non-secure port we have to add --secure-port=0 in the command section and update the scheme in both the liveness and startup probes from HTTPS to HTTP
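To make the three steps concrete, these are the lines they touch in the manifest pasted above (same values as the lab; this is a summary, not a new file):

```yaml
# In the container's command section:
- --scheduler-name=my-scheduler   # step 1: distinct name from the default scheduler
- --port=10282                    # step 2: non-default insecure port, avoiding the bind conflict
- --secure-port=0                 # step 3: disable the secure port entirely
# ...and in both the livenessProbe and the startupProbe:
#     port: 10282
#     scheme: HTTP                # step 3: probe over plain HTTP instead of HTTPS
```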

Basavraj Nilkanthe:
@Deepak Mourya you can simply copy the manifest I have pasted

Deepak Mourya:
hmm ok let me test

Basavraj Nilkanthe:
@Mumshad Mannambeth Ideally we should be able to run two containers with the same port in Docker, right?

Gennway:
in theory yes, but as I remember, hostNetwork: true forces the container to use the node's network, so you can't run two containers on the same port when hostNetwork: true is set. But I'm not sure
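That point can be sketched without Docker or Kubernetes at all: within one network namespace, a second bind to the same port fails, and hostNetwork: true puts every such pod into the node's single namespace, so two schedulers there behave like the two sockets below.

```shell
# Two sockets in the same network namespace cannot bind the same port.
python3 - <<'EOF'
import socket

a = socket.socket()
a.bind(("127.0.0.1", 0))        # kernel picks a free port
a.listen()
port = a.getsockname()[1]

b = socket.socket()
try:
    b.bind(("127.0.0.1", port)) # same namespace, same port
    print("no conflict")
except OSError:
    print("conflict")           # fails: address already in use
EOF
```

Containers on Docker's default bridge network each get their own namespace, which is why two plain containers *can* both listen on, say, port 80; host networking removes that isolation.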

Deepak Mourya:
although it is running now, but things are not clear with this topic

Narendra Singh:
@Basavraj Nilkanthe A pod has its own network namespace. Inside a pod, all containers share the network and storage. So two containers in a pod can't use the same port; the ports must be different.

Narendra Singh:
https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking

Basavraj Nilkanthe:
@Narendra Singh that's true… we can run multiple pods with the same port and that works

Basavraj Nilkanthe:
but the question is why two Docker containers can run independently with the same port. @Gennway I agree it depends on the hostNetwork=true flag; with host networking that's a valid case where we can't use the same port for two containers… but are we using that here?