Question on network policy

Q. For this question, please set the context to cluster1 by running:
kubectl config use-context cluster1
An nginx-based pod called cyan-pod-cka28-trb is running in the cyan-ns-cka28-trb namespace, and it is exposed within the cluster by the cyan-svc-cka28-trb service.
This is a restricted pod, so a network policy called cyan-np-cka28-trb has been created in the same namespace to apply some restrictions to it.
Two other pods, cyan-white-cka28-trb1 and cyan-black-cka28-trb, are also running in the default namespace.
The nginx-based app running in the cyan-pod-cka28-trb pod is exposed internally on the default nginx port (80).
Expectation: This app should only be accessible from the cyan-white-cka28-trb1 pod.
Problem: This app is not accessible from anywhere.
Troubleshoot this issue and fix the connectivity as per the requirement listed above.
Note: You can exec into the cyan-white-cka28-trb and cyan-black-cka28-trb pods and test connectivity using the curl utility.
You may update the network policy, but make sure it is not deleted from the cyan-ns-cka28-trb namespace.
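
For example, once the policy is fixed, the expectation can be verified like this (a quick sketch using the pod and service names above; --max-time just stops curl from hanging when traffic is blocked):

# Allowed client: should return the nginx welcome page
kubectl exec -it cyan-white-cka28-trb -- curl --max-time 5 cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local
# Blocked client: should time out
kubectl exec -it cyan-black-cka28-trb -- curl --max-time 5 cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local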

I updated the network policy as below:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: "2023-08-27T11:47:23Z"
  generation: 3
  name: cyan-np-cka28-trb
  namespace: cyan-ns-cka28-trb
  resourceVersion: "11275"
  uid: da47e1c8-42db-4a67-860c-8e0174f4dc3f
spec:
  egress:
  - ports:
    - port: 80
      protocol: TCP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          app: cyan-white-cka28-trb
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: cyan-app-cka28-trb
  policyTypes:
  - Ingress
  - Egress

But I am still not able to access the service from the cyan-white pod. I am using the command below to exec into the pod and test:

kubectl exec -it cyan-white-cka28-trb -- curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local

Can you tell me where I am making a mistake?

Please see the solution here: CKA mock labs about network policies

Thanks, Ali, for the detailed explanation.

One doubt here: why have you set the egress port to 8080? Shouldn't it be 80, since cyan-pod-cka28-trb is exposed on port 80, as mentioned in the question?

Also, everything is the same, but I am still not able to access the svc from cyan-white.

Can you please check my netpol file? I don't understand why it is not working.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: "2023-08-27T19:27:42Z"
  generation: 4
  name: cyan-np-cka28-trb
  namespace: cyan-ns-cka28-trb
  resourceVersion: "16066"
  uid: 4f4dfc86-f815-47ce-8b7c-d3877f47629b
spec:
  egress:
  - ports:
    - port: 8080
      protocol: TCP
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          app: cyan-white-cka28-trb
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: cyan-app-cka28-trb
  policyTypes:
  - Ingress
  - Egress

Hi @uzmashafi061

I have created a more detailed write-up here: https://github.com/kodekloudhub/certified-kubernetes-administrator-course/blob/master/docs/16-Ultimate-Mocks/02-Troubleshooting/docs/19-C1-netpol-cyan-pod-cka28-trb.md
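
One thing worth checking first (a sketch using the names from the policy you posted) is that the selectors match the pods' real labels; if the from selectors don't match cyan-white's actual labels, the traffic stays blocked:

kubectl get pod cyan-pod-cka28-trb -n cyan-ns-cka28-trb --show-labels   # must match spec.podSelector
kubectl get pod cyan-white-cka28-trb --show-labels                      # must match the ingress podSelector
kubectl get ns default --show-labels                                    # must match the namespaceSelector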

Hi Alistair,

I did exactly the same, but it didn't work. An nslookup of cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local also returns NXDOMAIN from the cyan-white pod. Please advise.

You must have missed something, because it does work.

Sorry, but it didn't work for me either.

Please revisit the write-up in the post above. There is updated information in there about an intermittent bug, which can be worked around if you learned the manual scheduling topic well :wink:

I was wondering why my network policy was always failing: I had the dash before the pod selector! Thanks for the explanation.

@robertointernet The key thing to remember here is that a dash (-) adds a new rule, and all the rules are OR-ed together. So if rule 1 allows (namespace selector without a pod constraint) OR rule 2 allows (pod selector without a namespace constraint), then the traffic is allowed.

Multiple components of a single rule are AND-ed, so traffic must match the pod selector AND the namespace selector to be allowed. Thus a rule must contain both the namespace and pod selectors if you want to allow only certain pods from a given namespace. You can then add more rules, each with both selectors, to also allow some pods from another namespace, and so on.
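
To make that concrete, here is a sketch of the two forms (the team: dev and app: frontend labels are just placeholders):

# One from entry: the selectors are AND-ed, so only app=frontend pods
# in the team=dev namespace are allowed.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: dev
    podSelector:
      matchLabels:
        app: frontend

# A dash before podSelector makes it a second entry: entries are OR-ed,
# so ANY pod in the team=dev namespace is allowed, and so is any
# app=frontend pod from ANY namespace.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: dev
  - podSelector:
      matchLabels:
        app: frontend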

Get this ingrained and you will nail all pod-based netpol questions, which, to be fair, are most of them!

Once again to everyone, be aware of the intermittent bug in this lab, which is detailed in the solution write-up posted above.
