Hi,
This is for internal Kubernetes cluster communication only: it allows inter-node traffic so that Kubernetes pods can communicate with each other.
On a deployed test cluster, as you can see, traffic is only allowed from the security group itself, and only on port 443; no ports are open to the world (0.0.0.0/0):
Validation
To confirm this, I connected to one of the deployed nodes; here is the list of listening sockets:
$ ss -auntpl
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:68 0.0.0.0:* users:(("dhclient",pid=2373,fd=6))
udp UNCONN 0 0 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=2128,fd=6))
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* users:(("chronyd",pid=2159,fd=5))
udp UNCONN 0 0 0.0.0.0:605 0.0.0.0:* users:(("rpcbind",pid=2128,fd=7))
udp UNCONN 0 0 [::]:111 [::]:* users:(("rpcbind",pid=2128,fd=9))
udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=2159,fd=6))
udp UNCONN 0 0 [fe80::cca:a6ff:febf:8bbc]%eth0:546 [::]:* users:(("dhclient",pid=2408,fd=5))
udp UNCONN 0 0 [::]:605 [::]:* users:(("rpcbind",pid=2128,fd=10))
tcp LISTEN 0 4096 127.0.0.1:50051 0.0.0.0:* users:(("aws-k8s-agent",pid=4853,fd=9))
tcp LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=3338,fd=32))
tcp LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=4610,fd=14))
tcp LISTEN 0 4096 127.0.0.1:35113 0.0.0.0:* users:(("kubelet",pid=3338,fd=15))
tcp LISTEN 0 4096 127.0.0.1:61679 0.0.0.0:* users:(("aws-k8s-agent",pid=4853,fd=12))
tcp LISTEN 0 128 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=2128,fd=8))
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=2700,fd=3))
tcp LISTEN 0 4096 0.0.0.0:31800 0.0.0.0:* users:(("kube-proxy",pid=4610,fd=11))
tcp LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=2555,fd=13))
tcp LISTEN 0 4096 0.0.0.0:30845 0.0.0.0:* users:(("kube-proxy",pid=4610,fd=10))
tcp LISTEN 0 4096 127.0.0.1:36541 0.0.0.0:* users:(("containerd",pid=3016,fd=14))
tcp LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=3338,fd=31))
tcp LISTEN 0 4096 *:9100 *:* users:(("node_exporter",pid=31735,fd=3))
tcp LISTEN 0 4096 *:61678 *:* users:(("aws-k8s-agent",pid=4853,fd=11))
tcp LISTEN 0 128 [::]:111 [::]:* users:(("rpcbind",pid=2128,fd=11))
tcp LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=4610,fd=9))
tcp LISTEN 0 4096 *:32468 *:* users:(("kube-proxy",pid=4610,fd=12))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=2700,fd=4))
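If you want to narrow a listing like this down to just the sockets bound on all interfaces, you can filter the local-address column. A minimal sketch, using a captured sample so it runs anywhere (on a real node you would pipe `ss -tnl | tail -n +2` in instead):

```shell
# Sample of the "ss -tnl" columns: State Recv-Q Send-Q Local:Port Peer:Port
sample='LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 4096 *:10250 *:*
LISTEN 0 128 [::]:111 [::]:*'

# Keep only sockets whose local address is a wildcard (0.0.0.0, * or [::])
wildcard=$(printf '%s\n' "$sample" | awk '$4 ~ /^(0\.0\.0\.0|\*|\[::\]):/')
printf '%s\n' "$wildcard"
```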
As you can see, several ports are listening on 0.0.0.0. However, they are not accessible from the internet, only from the private network. To verify this, I ran nmap against the default ports on the external IP of this host:
$ nmap -Pn ec2-x-x-x-x.eu-west-3.compute.amazonaws.com
Starting Nmap 7.93 ( https://nmap.org ) at 2023-04-18 10:52 CEST
Nmap scan report for ec2-35-180-178-121.eu-west-3.compute.amazonaws.com (35.180.178.121)
Host is up.
All 1000 scanned ports on ec2-35-180-178-121.eu-west-3.compute.amazonaws.com (35.180.178.121) are in ignored states.
Not shown: 1000 filtered tcp ports (no-response)
Nmap done: 1 IP address (1 host up) scanned in 201.30 seconds
As you can see, nothing is open. As a second validation, I took two of the ports listed by the ss command (22 and 111), which you can see are listening on 0.0.0.0:
$ nc -w 3 -v ec2-x-x-x-x.eu-west-3.compute.amazonaws.com 111
nc: connect to ec2-35-180-178-121.eu-west-3.compute.amazonaws.com (35.180.178.121) port 111 (tcp) timed out: Operation now in progress
$ nc -w 3 -v ec2-x-x-x-x.eu-west-3.compute.amazonaws.com 22
nc: connect to ec2-35-180-178-121.eu-west-3.compute.amazonaws.com (35.180.178.121) port 22 (tcp) timed out: Operation now in progress
The connections timed out because connectivity couldn’t be established: the ports are not open to the outside.
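Note that a timeout is the signature of a security group silently dropping packets; if the host were reachable but the port simply closed, you would get an immediate “connection refused” instead. The same probe can also be scripted without nc, using bash’s /dev/tcp pseudo-device. A sketch, with the hostname left as a placeholder:

```shell
# check_port prints "open" if a TCP connect succeeds within 3 seconds,
# "closed/filtered" otherwise (refused, or dropped by the security group)
check_port() {
  host=$1
  port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed/filtered"
  fi
}

check_port ec2-x-x-x-x.eu-west-3.compute.amazonaws.com 22    # placeholder hostname
check_port ec2-x-x-x-x.eu-west-3.compute.amazonaws.com 111   # placeholder hostname
```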
Additional notes
If you deploy load balancers (Kubernetes NodePorts) on this cluster, manually or via Qovery when deploying public containers, Kubernetes uses dedicated ports (30000-32767) which are exposed to the outside. Those services will then be publicly accessible through one of these random ports (more info: Ports and Protocols | Kubernetes).
In general, it’s not an issue as you’ve requested those services to be public by deploying a load balancer.
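Incidentally, the kube-proxy listeners in the ss output above (31800, 30845 and 32468) are exactly such NodePorts. A quick sketch of checking ports against the default NodePort range:

```shell
# Ports taken from the ss output above, plus 22 and 111 for contrast
ports='31800 30845 32468 22 111'

# Kubernetes' default NodePort range is 30000-32767
for p in $ports; do
  if [ "$p" -ge 30000 ] && [ "$p" -le 32767 ]; then
    echo "$p: NodePort range"
  else
    echo "$p: regular port"
  fi
done
```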
However, if you still want to restrict all external access except port 443, please enable the Qovery “static IP”[1][2] feature. Note that this brings an extra cost, because it sets up AWS NAT gateways in 3 zones for high availability.
I hope everything is clear.
Pierre