
Shubham Sharma:
Hi everyone,
one query: I am not able to see my pods getting automatically deleted after the deployment got deleted… I have been facing this since I did my cluster upgrade ??
[root@master kubelet]# k get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-77b4fdf86c-rpqwt   1/1     Running   0          12m
nginx-77b4fdf86c-zlpbb   1/1     Running   0          12m
nginx-9868564d6-2hndd    1/1     Running   0          12m
nginx-9868564d6-5tlps    1/1     Running   0          12m
[root@master kubelet]# k get deploy
No resources found in default namespace.
[root@master kubelet]#

Aneek Bera:
This issue sometimes occurs when the system is not stable or some form of communication is broken. Delete the pods using the force delete command.
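Something like this, for example (pod names taken from your k get po output above):

kubectl delete pod nginx-77b4fdf86c-rpqwt nginx-77b4fdf86c-zlpbb nginx-9868564d6-2hndd nginx-9868564d6-5tlps --grace-period=0 --force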

Shubham Sharma:
yes, I tried that but the pods get auto-created again. BTW my kube-apiserver-master pod is in 0/1 state even though it shows Running, could this be an issue ??
kube-apiserver-master 0/1 Running 0 5m30s
Warning Unhealthy 87s (x301 over 6m27s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
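(For reference, the readiness check the kubelet is hitting can also be queried directly on the master, assuming the default secure port 6443:

curl -k "https://localhost:6443/readyz?verbose"

With verbose output it lists which of the apiserver's internal checks, often the etcd check, is failing.)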

Shubham Sharma:
to delete the pods manually I have to first delete the rs, then the pods get deleted… Seems like deleting the deployment is not removing the rs, which is why the pods from the former deployment are still running
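i.e. roughly this (the rs names are inferred from the pod-name prefixes above, so double-check with k get rs first):

kubectl get rs
kubectl delete rs nginx-77b4fdf86c nginx-9868564d6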

Shubham Sharma:
Please suggest how I can resolve this issue ??

Fernando Carvalho(Fish):
How’s it going @Shubham Sharma? First, there is not enough information to understand the scenario, but I can think of two possible issues:

  1. Check whether the namespace and the labels & selectors match; the Deployment's selector should point to these Pods (see the commands sketched after this list).
  2. rs and Deployments are complementary but different objects. My hypothesis is that these objects are overlapping their functions on the k8s cluster.
    Try to remove the rs and recreate your Deployment object.
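For item 1, a quick way to check (using one of the pod names from the earlier output) is to compare the pod labels against the rs selectors and see which owner the pods still point at:

kubectl get pods --show-labels
kubectl get rs -o wide
kubectl get pod nginx-77b4fdf86c-rpqwt -o jsonpath='{.metadata.ownerReferences}'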

Shubham Sharma:
sorry to bother you guys… I guess it was some configuration or communication issue (the kube-apiserver was not able to serve calls)…
I started facing this issue after I upgraded my cluster to the latest version… I have now done a kubeadm reset and configured my cluster again, and after that the issue got resolved…
Also my kube-apiserver-master pod is now in 1/1 state… but I think there might be some other solution instead of resetting the cluster…
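(A less drastic path on a kubeadm cluster is usually to look at the control-plane static pods before resetting: cascading deletion of rs/pods is done by the garbage collector in kube-controller-manager, so if the apiserver is unhealthy after an upgrade that cleanup stops working. Roughly, assuming a default kubeadm layout on a node named "master":

kubectl -n kube-system get pods
kubectl -n kube-system logs kube-apiserver-master
kubectl -n kube-system logs kube-controller-manager-master
ls /etc/kubernetes/manifests/

If the apiserver manifest got mangled by the upgrade, fixing the file under /etc/kubernetes/manifests is often enough; the kubelet restarts the static pod automatically.)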
thanks a lot for the replies @Fernando Carvalho(Fish) @Aneek Bera

Aneek Bera:
superb