Problem
Solve this question on: ssh cluster1-controlplane.
An nginx-based pod called cyan-pod-cka28-trb is running under the cyan-ns-cka28-trb namespace and is exposed within the cluster using the cyan-svc-cka28-trb service.
This is a restricted pod, so a network policy called cyan-np-cka28-trb has been created in the same namespace to apply some restrictions on this pod.
Two other pods called cyan-white-cka28-trb and cyan-black-cka28-trb are also running in the default namespace.
The nginx-based app running on the cyan-pod-cka28-trb pod is exposed internally on the default nginx port (80).
Expectation: This app should only be accessible from the cyan-white-cka28-trb pod.
Clarification needed
- As per the solution, we need to add the following under the egress section:

      egress:
      - ports:
        - port: 80
          protocol: TCP
        to:
        - ipBlock:
            cidr: 0.0.0.0/0
  The problem clearly states that we only need to worry about incoming traffic from the two other pods, cyan-black-cka28-trb and cyan-white-cka28-trb, in the default namespace. Any suggestion why we need to bother with an egress policy then? An ingress policy should be enough, correct?
- Even after applying the solution provided, I am still able to access the pod from cyan-black-cka28-trb:
cluster1-controlplane ~ ➜ kubectl exec -it cyan-black-cka28-trb -- sh
/home # curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
Please let me know in case I missed anything.
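
For what it's worth, an ingress-only policy can satisfy the stated expectation, but with one catch: cyan-white-cka28-trb lives in the default namespace while the policy lives in cyan-ns-cka28-trb, so the `from` rule must combine a `namespaceSelector` with a `podSelector` in the same entry (a bare `podSelector` only matches pods in the policy's own namespace). A minimal sketch, assuming the pods carry labels like `app: cyan-pod-cka28-trb` and `app: cyan-white-cka28-trb` (hypothetical; verify with `kubectl get pods --show-labels`):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cyan-np-cka28-trb
  namespace: cyan-ns-cka28-trb
spec:
  podSelector:
    matchLabels:
      app: cyan-pod-cka28-trb        # assumed label on the restricted pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    # namespaceSelector + podSelector in ONE list entry = AND
    # (two separate entries would mean OR and allow too much)
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          app: cyan-white-cka28-trb  # assumed label on the allowed client
    ports:
    - protocol: TCP
      port: 80
```

If cyan-black-cka28-trb can still reach the service after a policy like this is applied, it is worth checking that the labels in `spec.podSelector` and the `from` selectors actually match the running pods, and that the cluster's CNI plugin enforces NetworkPolicy at all (flannel, for example, does not).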