We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it. Create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service over port 80. Important: don't delete any currently deployed objects.
The provided solution is to create a NetworkPolicy that allows ingress on port 80.
By default, a pod is non-isolated for ingress; all inbound connections are allowed. A pod is isolated for ingress if there is any NetworkPolicy that both selects the pod and has “Ingress” in its policyTypes; we say that such a policy applies to the pod for ingress. When a pod is isolated for ingress, the only allowed connections into the pod are those from the pod’s node and those allowed by the ingress list of some NetworkPolicy that applies to the pod for ingress. The effects of those ingress lists combine additively.
So the purpose of a network policy is to limit connectivity, not to provide it, right?
So I don't understand why the solution works. Could you please help? Thanks in advance.
With no network policy, no connection is blocked.
With the given solution (sketched after this list), we have:
- A podSelector which targets any pod having the label run=np-test-1; the policy will be applied to those pods.
- A policyType of Ingress, meaning only ingress rules will apply to this pod.
- A single rule which specifies only TCP port 80. There are no constraints on namespaces or other pods, therefore the rule permits port 80 (and only port 80) from anywhere.
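Here is a minimal sketch of such a policy, assuming the pod carries the label run=np-test-1 (the exact labels on your pod may differ; check with `kubectl get pod np-test-1 --show-labels`):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
spec:
  podSelector:
    matchLabels:
      run: np-test-1   # assumed label; verify it on your cluster
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80         # allow inbound TCP/80 from anywhere
```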
The existing default-deny policy, by contrast, has an empty podSelector and no specific rules, which means it applies to all pods in the namespace and blocks everything from everywhere.
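A typical default-deny policy of this kind looks roughly like the following (a sketch; the actual object in your cluster may differ slightly):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}      # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress            # pods are isolated for ingress, and no rules allow anything in
```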
The fix, without altering existing objects (meaning we cannot touch the existing netpol), is to create a new netpol targeting the concerned pod.
A specific policy attached to a pod adds its allow rules alongside those of any general policy, such as the default-deny one, so in effect it carves an exception out of the blanket deny.
We have added a new policy attached to the pod which allows ingress on port 80. For this pod only, this opens port 80 through the general “deny everything” set by the existing network policy called default-deny.
The idea of putting a default-deny policy in a namespace is that nobody can launch a pod there with any network access unless that access is explicitly allowed by creating a new network policy for the pod, with rules saying what is permitted.
Remember that an ingress or egress rule allows something. Anything not mentioned in the rules is denied - that is the “limiting” part.
The evaluation process works like this:
1. Consider all policies which target this pod. In this case, there are default-deny and ingress-to-nptest.
2. Combine the rules from all of those policies to get a final set of rules. In this case, that is no rules (from default-deny) plus allow port 80 (from ingress-to-nptest). The result of the combination is: allow port 80.
Wow, this helps me a lot – thank you!
I got lost in the troubleshooting phase because one of my first hypotheses was to debug the DNS service in the kube-system namespace. No problems were detected there (I used k events and k logs for the pods, but nothing came close to a potential solution, as I couldn't even identify the issue!).
This was kind of weird because the DNS add-on allows name resolution and internal connectivity via a service by default. So, does the default network policy take precedence over the initial DNS config by blocking all ingress traffic to the cluster?
I then thought this might be a misconfiguration of kube-proxy, but that hypothesis went out of the window as well, since the svc was correctly bound to the pod with the correct labels/selectors and endpoints.
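(For reference, a service correctly bound in this way would look roughly like the sketch below; the selector and port values here are assumptions based on the task description, not the actual object from the exam.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: np-test-service
spec:
  selector:
    run: np-test-1     # must match the pod's labels for endpoints to populate
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```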
I would not have thought to describe the existing network policy to see where the problem was. Do you have any additional resources for networking troubleshooting?
Googling that does not turn up many relevant articles on the matter.
Thank you, Alistair.