NodePort service not accessible via the master node

I managed to set up a Kubernetes cluster on Oracle Cloud with kubeadm and Flannel. My setup includes 1 master and 2 worker nodes. The cluster is live and working, and I deployed an nginx image with a NodePort service to expose it. But I can only access nginx on the worker node IPs; I am unable to curl or open it on the master node's IP.

Image: Canonical-Ubuntu-22.04-Minimal-aarch64-2023.06.30-0 from Oracle
There is no firewall on the image itself, and I used iptables to allow the necessary ports (and, for test purposes, allowed all ingress and egress traffic in the cloud security rules):

For the master node:
sudo iptables -I INPUT -p tcp --dport 6443 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 2379:2380 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 10248:10260 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 8285 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 8472 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 30000:32767 -j ACCEPT

For the worker nodes:
sudo iptables -I INPUT -p tcp --dport 10248:10260 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 8285 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 8472 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 30000:32767 -j ACCEPT
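
A quick sanity check that these rules actually landed (note that rules added with iptables -I are not persistent across reboots unless saved):

# list the INPUT chain with rule numbers to confirm the ACCEPT rules are in place
sudo iptables -L INPUT -n --line-numbers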

My guess is that I need to open up some other port. Any help is appreciated :slight_smile:

I don’t see anything obviously wrong with the firewall. But what’s the yaml for the service definition of your nginx service?

This is the YAML of the service that was created:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-service","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"nginx"},"type":"NodePort"}}
  creationTimestamp: "2023-07-19T10:34:26Z"
  name: nginx-service
  namespace: default
  resourceVersion: "103392"
  uid: 02620be0-c254-4fde-a601-a4a6350302a4
spec:
  clusterIP: 10.100.131.75
  clusterIPs:
  - 10.100.131.75
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31608
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

The deployment gets exposed on both worker nodes, but not on the master :confused:
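
For anyone hitting the same symptom, a sketch of checks that help narrow it down (service name and label taken from the YAML above):

# does the service have endpoints, and which nodePort was assigned?
kubectl get svc nginx-service -o wide
kubectl get endpoints nginx-service

# which nodes actually run the pods?
kubectl get pods -l app=nginx -o wide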

I had 3 replicas, and since there was at least one pod on each worker, both workers were accessible. Now that I have decreased the replicas to 1, the deployment is only accessible on the worker node where the pod is scheduled, not on the other worker node (and not on the master either).
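
Since the service uses externalTrafficPolicy: Cluster, every node is supposed to proxy the NodePort to a pod wherever it runs, so testing each node after scaling down makes the broken forwarding obvious. A rough sketch (the deployment name nginx and <node-ip> below are assumptions/placeholders):

# scale down so only one node hosts a pod (deployment name assumed to be "nginx")
kubectl scale deployment nginx --replicas=1
kubectl get pods -l app=nginx -o wide

# with externalTrafficPolicy: Cluster this should still answer on every node's IP
curl -m 5 http://<node-ip>:31608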

Found the problem in my iptables :smiley:

Chain FORWARD (policy ACCEPT)
target                  prot opt source      destination
KUBE-PROXY-FIREWALL     all  --  anywhere    anywhere    ctstate NEW /* kubernetes load balancer firewall */
KUBE-FORWARD            all  --  anywhere    anywhere    /* kubernetes forwarding rules */
KUBE-SERVICES           all  --  anywhere    anywhere    ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere    anywhere    ctstate NEW /* kubernetes externally-visible service portals */
REJECT                  all  --  anywhere    anywhere    reject-with icmp-host-prohibited
FLANNEL-FWD             all  --  anywhere    anywhere

The REJECT rule was the one causing the issue: it comes before the FLANNEL-FWD rule, so forwarded pod-network traffic was rejected before Flannel's rule could accept it.
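
A quick way to see that ordering with rule numbers:

sudo iptables -L FORWARD -n --line-numbers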

What I did to fix it:

# delete the REJECT rule and re-append it so it ends up after the FLANNEL-FWD and KUBE-* rules
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
sudo iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
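
To keep the new rule order across reboots, the rules also need to be saved; a sketch, assuming the iptables-persistent / netfilter-persistent package is installed (worth checking before relying on it):

# persist the current iptables rules (only works if netfilter-persistent is installed)
sudo netfilter-persistent save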

I am not sure why this happened; maybe I did something during the configuration :thinking: