worker-app-pod keeps failing with CrashLoopBackOff

Name: worker-app-pod
Namespace: default
Priority: 0
Service Account: default
Node: minikube/192.168.49.2
Start Time: Wed, 30 Jul 2025 00:58:27 +0500
Labels: app=demo-voting-app
name=worker-app-pod
Annotations: <none>
Status: Running
IP: 10.244.0.83
IPs:
IP: 10.244.0.83
Containers:
worker-app:
Container ID: docker://87868dea51db43d4cdbc4341185c1f1252a6be375f62fc296a98e19b2d775e60
Image: kodekloud/examplevotingapp_worker:v1
Image ID: docker-pullable://kodekloud/examplevotingapp_worker@sha256:741e3aaaa812af72ce0c7fc5889ba31c3f90c79e650c2cb31807fffc60622263
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 30 Jul 2025 01:12:18 +0500
Finished: Wed, 30 Jul 2025 01:12:18 +0500
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g5kpb (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-g5kpb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/worker-app-pod to minikube
Normal Pulling 15m kubelet Pulling image "kodekloud/examplevotingapp_worker:v1"
Normal Pulled 4m47s kubelet Successfully pulled image "kodekloud/examplevotingapp_worker:v1" in 10m53.036s (10m53.036s including waiting). Image size: 1719987312 bytes.
Normal Created 110s (x6 over 4m47s) kubelet Created container: worker-app
Normal Started 110s (x6 over 4m47s) kubelet Started container worker-app
Normal Pulled 110s (x5 over 4m47s) kubelet Container image "kodekloud/examplevotingapp_worker:v1" already present on machine
Warning BackOff 1s (x24 over 4m45s) kubelet Back-off restarting failed container worker-app in pod worker-app-pod_default(ce9e3e31-cb79-4edd-b3ce-ee32faa54575)

The best way to debug this is to look at the logs. What do the worker pod's logs look like?
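You can pull them with `kubectl logs`; since the container is crash-looping, `--previous` is handy for reading the output of the last terminated run:

```shell
# Logs from the current container instance
kubectl logs worker-app-pod

# Logs from the previously terminated instance (useful in CrashLoopBackOff,
# when the current container may exit before logging anything)
kubectl logs worker-app-pod --previous
```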

Authentication method not supported (Received: 10)
at Npgsql.NpgsqlConnector.ParseServerMessage(ReadBuffer buf, BackendMessageCode code, Int32 len, DataRowLoadingMode dataRowLoadingMode, Boolean isPrependedMessage)
at Npgsql.NpgsqlConnector.DoReadMessage(DataRowLoadingMode dataRowLoadingMode, Boolean isPrependedMessage)
at Npgsql.NpgsqlConnector.ReadMessageWithPrepended(DataRowLoadingMode dataRowLoadingMode)
at Npgsql.NpgsqlConnector.HandleAuthentication(String username, NpgsqlTimeout timeout)
at Npgsql.NpgsqlConnector.Open(NpgsqlTimeout timeout)
at Npgsql.ConnectorPool.Allocate(NpgsqlConnection conn, NpgsqlTimeout timeout)
at Npgsql.NpgsqlConnection.OpenInternal()
at Worker.Program.OpenDbConnection(String connectionString) in /code/src/Worker/Program.cs:line 78
at Worker.Program.Main(String[] args) in /code/src/Worker/Program.cs:line 19

This is a common problem with the worker pod: it's failing to authenticate with the PostgreSQL database. A quick workaround that will get you through this lab is to reconfigure how PostgreSQL handles authentication. This isn't something you'd do in a production app, but given that this is demo code and not particularly secure anyway, it will get you through the exercise:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: postgres-pod
    app: demo-voting-app
  name: postgres-pod
spec:
  containers:
  - name: postgres
    image: postgres:15
    ports:
    - containerPort: 5432
    env:
      - name: POSTGRES_USER
        value: "postgres"
      - name: POSTGRES_PASSWORD
        value: "postgres"
      - name: POSTGRES_HOST_AUTH_METHOD
        value: trust
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Note I've added POSTGRES_HOST_AUTH_METHOD=trust to the environment. This tells PostgreSQL to accept all connections without any password check, so the worker never reaches the authentication step that's failing. The "Received: 10" in the stack trace is PostgreSQL's SASL (SCRAM-SHA-256) authentication request, which the old Npgsql version baked into this worker image doesn't support; with trust enabled, PostgreSQL never sends that request, and this particular error goes away.
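Environment changes on a bare Pod don't take effect in place, so the postgres pod has to be deleted and re-created. A sketch of the steps, assuming the manifest above is saved as `postgres-pod.yaml` (adjust the filename to whatever yours is called):

```shell
# Recreate the postgres pod so the new env var takes effect
kubectl delete pod postgres-pod
kubectl apply -f postgres-pod.yaml   # assumed filename for the manifest above

# Watch the worker recover; the backoff delay can reach five minutes,
# so either wait it out or delete the worker pod to force a fresh start
kubectl get pod worker-app-pod -w
```

If the worker is still in a long backoff window, `kubectl delete pod worker-app-pod` followed by re-applying its manifest gets you an immediate retry.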