Even after implementing the exact solution given for the question below, we can still curl the app from the cyan-black-cka28-trb pod, so please fix this.
An nginx-based pod called cyan-pod-cka28-trb is running in the cyan-ns-cka28-trb namespace and is exposed within the cluster using the cyan-svc-cka28-trb service.
This is a restricted pod, so a network policy called cyan-np-cka28-trb has been created in the same namespace to apply some restrictions to it.
Two other pods called cyan-white-cka28-trb1 and cyan-black-cka28-trb are also running in the default namespace.
The nginx-based app running on the cyan-pod-cka28-trb pod is exposed internally on the default nginx port (80).
Expectation: This app should only be accessible from the cyan-white-cka28-trb1 pod.
Problem: This app is not accessible from anywhere.
Troubleshoot this issue and fix the connectivity as per the requirement listed above.
Note: You can exec into cyan-white-cka28-trb and cyan-black-cka28-trb pods and test connectivity using the curl utility.
You may update the network policy, but make sure it is not deleted from the cyan-ns-cka28-trb namespace.
The egress rule you will find in the existing network policy is a red herring; it does not feature in the solution to this problem, so you can ignore it.
There are two issues that need fixing here:
The reason nothing can connect at the start is that the ingress port in the netpol is 8080, which is wrong; it should be 80. Why? Examine the pod cyan-pod-cka28-trb to which the policy is attached and notice that the image is stock nginx from Docker Hub. A stock nginx image listens on port 80 by default, so fix this part of the policy.
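A quick way to confirm both facts (the pod's image and the policy's current port) is with kubectl. A minimal sketch, assuming the resource names from the question:

k -n cyan-ns-cka28-trb get pod cyan-pod-cka28-trb -o jsonpath='{.spec.containers[*].image}'  # prints the image; expect stock nginx
k -n cyan-ns-cka28-trb describe netpol cyan-np-cka28-trb  # shows the ingress rule, currently allowing port 8080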
With that fixed, everything in the default namespace now has access to the pod on port 80, and curl returns the nginx default page. We therefore need to add a podSelector to the ingress from rule so that incoming traffic is only accepted from the nominated pod in the default namespace. Because it sits in the same from element as the namespaceSelector, the two selectors are ANDed (see the sketch below for the contrast with OR).
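The dash placement is the whole story here: with a dash in front of podSelector it becomes a second, independent from entry and the rule turns into an OR. For contrast, a sketch of both forms using the selectors from this policy:

# AND: one from element; traffic must match BOTH selectors (what we want)
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: default
    podSelector:
      matchLabels:
        app: cyan-white-cka28-trb

# OR: two from elements; traffic matching EITHER selector is allowed (too permissive)
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: default
  - podSelector:
      matchLabels:
        app: cyan-white-cka28-trb

Note that in the OR form the bare podSelector entry selects pods in the policy's own namespace (cyan-ns-cka28-trb), not in default, which is another reason it would not do what we want here.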
The finished product is therefore:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cyan-np-cka28-trb
  namespace: cyan-ns-cka28-trb
spec:
  egress:
  - ports:
    - port: 8080
      protocol: TCP
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector: # no dash before podSelector makes this AND rather than OR
        matchLabels:
          app: cyan-white-cka28-trb
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: cyan-app-cka28-trb
  policyTypes:
  - Ingress
  - Egress
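Since the note says the policy must not be deleted from the namespace, edit it in place rather than deleting and recreating it:

k -n cyan-ns-cka28-trb edit netpol cyan-np-cka28-trb
# change the ingress port from 8080 to 80, and add the podSelector under the
# same from element as the namespaceSelector (no leading dash)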
Curl commands for testing:
k exec -n default cyan-black-cka28-trb -it -- curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc
k exec -n default cyan-white-cka28-trb -it -- curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc
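With the policy fixed, the first command (from cyan-black-cka28-trb) should hang and time out, while the second (from cyan-white-cka28-trb) should return the default nginx page. Adding curl's --max-time flag makes the blocked test fail fast instead of hanging:

k exec -n default cyan-black-cka28-trb -it -- curl -m 5 cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc  # expect: timeout (blocked)
k exec -n default cyan-white-cka28-trb -it -- curl -m 5 cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc  # expect: nginx welcome page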
I can’t get this to work either. I have copied and pasted your solution from here, and there is no access from the cyan-white-cka28-trb pod. I will look at the detailed solution.
Actually, it works now. The only changes needed were adding the podSelector section for cyan-white-cka28-trb (without the dash in front of it) and changing the ingress port to 80.
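If the podSelector doesn’t seem to take effect, it’s worth checking that the label on the white pod really matches what the policy selects (the policy above assumes app=cyan-white-cka28-trb):

k -n default get pods --show-labels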
CORRECT @reteer - that is exactly how to fix the policy, and it is shown in the GitHub solution. It only doesn’t work if the lab hits the bug discussed further up in this thread.
So I came back and did the question again, and the bug was present. I got rid of the dash in the podSelector statement and changed the ingress port to 80. The cyan-pod-cka28-trb and cyan-black-cka28-trb pods were on the same node, so they were in the same pod CIDR, but the cyan-white-cka28-trb pod was on a different node. I changed the cyan-white-cka28-trb pod's nodeName to match the node the other two pods were on, then deleted and recreated the pod with that change. That put it on the same node, in the same CIDR, and the network policy then worked correctly.
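A sketch of that workaround, assuming the target node turns out to be node01 (check with -o wide first):

k -n default get pods -o wide  # note which node each pod runs on
k -n default get pod cyan-white-cka28-trb -o yaml > white.yaml
# edit white.yaml: set spec.nodeName to the node the other pods are on (e.g. node01)
# and strip runtime fields (status, metadata.uid/resourceVersion/creationTimestamp)
k -n default delete pod cyan-white-cka28-trb
k -n default create -f white.yaml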