CrashLoopBackOff error when running worker-app-pod

Hi @Tej-Singh-Rana ,

I used the image mentioned in the video:
source: GitHub - dockersamples/example-voting-app: Example distributed app composed of multiple containers for Docker, Compose, Swarm, and Kubernetes.
I built the images and pushed them to my Docker Hub repo.

I was able to get the voting app screen and the result app screen up on ports 30004 and 30005 respectively, but there is a problem with the worker app: I keep getting a CrashLoopBackOff message.
Because of this, the data is not being updated in the Postgres database.
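To see why the worker pod is crashing, it usually helps to inspect the pod's events and logs first. These are standard kubectl commands; the pod name below is a placeholder for your actual worker pod:

```shell
# List pods and find the crashing worker pod
kubectl get pods

# The Events section at the bottom states why the container keeps restarting
kubectl describe pod <worker-pod-name>

# Logs of the current container attempt, and of the previous one with -p
kubectl logs <worker-pod-name>
kubectl logs -p <worker-pod-name>
```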


I have the same issue. Help please. Is the image or the image source not right anymore?


I’m running into the same issue. I don’t want to submit it like this because I believe I will fail the task. Something is wrong with the image, or it is a DNS issue.


Same issue here. Help? Is something wrong with the worker image?


Hi,

Kindly follow this sequence when creating the Deployments & Services in the question:

Vote Deployment + Service
Redis Deployment + Service
DB Deployment + Service
Result Deployment + Service
Worker Deployment

This resolves the issue of the worker pod crashing.
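As a sketch, the creation order above looks like this. The manifest filenames are assumed from the example-voting-app-kubernetes-v2 repo; adjust them to your own filenames:

```shell
# 1. Vote Deployment + Service
kubectl create -f voting-app-deployment.yml -f voting-app-service.yml
# 2. Redis Deployment + Service
kubectl create -f redis-deployment.yml -f redis-service.yml
# 3. DB Deployment + Service
kubectl create -f postgres-deployment.yml -f postgres-service.yml
# 4. Result Deployment + Service
kubectl create -f result-app-deployment.yml -f result-app-service.yml
# 5. Worker Deployment last, once Redis and Postgres are reachable
kubectl create -f worker-app-deployment.yml
```

Creating the worker last matters because it tries to connect to Redis and Postgres on startup and crashes (hence CrashLoopBackOff) if those services do not exist yet.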

Please follow these full steps to see what you have missed, and try again:

Note: We will create the deployments again, so before following the steps, run kubectl delete deployment --all to delete the old deployments and avoid any conflicts.

  1. Run git clone https://github.com/mmumshad/example-voting-app-kubernetes-v2.git

  2. Run cd example-voting-app-kubernetes-v2/

  3. Run vim postgres-deployment.yml and modify its content as below, then save and exit.

Note: This is needed because of an update to the Postgres image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres-pod
      app: demo-voting-app
  template:
    metadata:
      name: postgres-pod
      labels:
        name: postgres-pod
        app: demo-voting-app
    spec:
      containers:
      - name: postgres
        image: postgres:9.4
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
        ports:
        - containerPort: 5432
  4. Run kubectl create -f . if you are creating the deployments for the first time; if you created the same deployments before, run kubectl apply -f . instead.

  5. Run kubectl get service to get the exposed NodePorts.

For example, if the output of the command shows the ports above, you can access the voting app by hitting One_of_the_worker_nodes_IP:32733 in your browser, and likewise the result app at One_of_the_worker_nodes_IP:30013.

Check:

Note: The voting application only accepts one vote per client. It does not register votes if a vote has already been submitted from a client.

Hope this helps!


Hi Shivansh, did you get a fix for your problem? I am getting the same issue.

Hi @drwolf, did you find a fix?

@Ayman Could you please give more details about this change to the postgres manifest? Why keep the user and password if I’m passing POSTGRES_HOST_AUTH_METHOD as ‘trust’?

There was a problem in the worker app’s original image, and this was a workaround: adding both credential methods so that the voting app works fine.

But for now, this issue is resolved, and using POSTGRES_HOST_AUTH_METHOD without adding the password will work fine.
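So a minimal Postgres container spec, assuming trust-only authentication is acceptable (fine for this lab, not for production), would look like:

```yaml
containers:
- name: postgres
  image: postgres:9.4
  env:
  # With trust auth, Postgres accepts connections without a password,
  # so POSTGRES_USER / POSTGRES_PASSWORD can be omitted.
  - name: POSTGRES_HOST_AUTH_METHOD
    value: trust
  ports:
  - containerPort: 5432
```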


Now, it doesn’t connect to Redis… @Ayman

Could you kindly clarify which modification you made?

Sorry, I had unintentionally modified the Redis ports. Problem solved. Thank you very much.

With pleasure and happy learning

For me, adding env: POSTGRES_HOST_AUTH_METHOD value: trust to the Postgres pod finally worked :smiley:


Getting a “CrashLoopBackOff” error when creating the pods for the worker app, although all 4 of my other apps are deployed and running fine.

Hello @deovrat.dubey,
Please follow these full steps to see what you have missed, and try again:

Note: We will create the deployments again, so before following the steps, run kubectl delete deployment --all to delete the old deployments and avoid any conflicts.

  1. Run git clone https://github.com/mmumshad/example-voting-app-kubernetes-v2.git

  2. Run cd example-voting-app-kubernetes-v2/

  3. Run vim postgres-deployment.yml and modify its content as below, then save and exit.

Note: This is needed because of an update to the Postgres image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres-pod
      app: demo-voting-app
  template:
    metadata:
      name: postgres-pod
      labels:
        name: postgres-pod
        app: demo-voting-app
    spec:
      containers:
      - name: postgres
        image: postgres:9.4
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
        ports:
        - containerPort: 5432
  4. Run kubectl create -f . if you are creating the deployments for the first time; if you created the same deployments before, run kubectl apply -f . instead.

  5. Run kubectl get service to get the exposed NodePorts.

For example, if the output of the command shows the ports above, you can access the voting app by hitting One_of_the_worker_nodes_IP:32733 in your browser, and likewise the result app at One_of_the_worker_nodes_IP:30013.

Check:

Note: The voting application only accepts one vote per client. It does not register votes if a vote has already been submitted from a client.

Hope this helps!

Thanks,
KodeKloud Support


Hi Ayman, you might be suggesting that I deploy the microservices using Deployments,
but I want to deploy the microservices by creating the pods and services manually, one by one. In this case, all the services and pods are created and running except the worker pod, which is giving the error. Can you provide the solution and the reason for this case?

One more thing: I can’t see you making any changes to the worker pod, which is where I am facing the issue.

Because the worker can’t authenticate with the postgres service, we update the postgres YAML file so that authentication works correctly and the worker pod stops falling into a crash loop.
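You can confirm this is the failure mode by checking the worker pod’s logs before and after the change; a failed Postgres connection or authentication error appears in the log output just before the container exits and restarts. The pod name below is a placeholder:

```shell
# The worker's logs show the Postgres connection/authentication error
kubectl logs <worker-pod-name>

# After updating postgres-deployment.yml, reapply and watch the worker recover
kubectl apply -f postgres-deployment.yml
kubectl get pods -w
```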
