Inter-cluster networking

Hi,

We noticed that internal network requests between our applications are coming from IP 100.64.x.x, which is not a local IP. Can you explain how inter-cluster application communication is set up, and what IP range is used? We need this for an application that relies on IP whitelisting.

Hi,

We would appreciate it if you guys shared how the cluster networking is set up.

Hey @prki !

Do you mind being a little more specific? From what I understand, you have two applications running on the same cluster that communicate with each other, and what you see from one app is that calls come from outside rather than from the internal network.
Am I right?

Do you mind sharing Qovery’s console link of those 2 apps so I can have a look?

Cheers

Hi @bchastanier,

We have two applications running on the same cluster communicating via the internal network: a front-end and a back-end. However, when we check the request IP on the back-end application we get a 100.64.x.x IP (external), while we would expect it to be in 10.x.x.x (local). We think there is a Kubernetes CNI plugin installed that proxies traffic and uses the 100.64.x.x IP range. Is this correct? What IP range is being used?
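
To illustrate, here is a minimal Express-style sketch of how we read the source IP on the back-end (not our exact code, just the idea):

```typescript
// Minimal sketch only, assuming a Node.js/Express-style back-end (our real code differs).
import express from "express";

const app = express();

app.use((req, _res, next) => {
  // With no HTTP proxy rewriting headers in between, this is the peer's socket address.
  // This is where we see 100.64.x.x instead of the expected 10.x.x.x.
  console.log("incoming request from", req.socket.remoteAddress);
  next();
});

app.get("/health", (_req, res) => {
  res.send("ok");
});

app.listen(3000, () => console.log("back-end listening on port 3000"));
```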

Hi @prki,

Yes indeed, a CNI plugin is installed, but first I would like to know which address / ENV var your frontend is using to communicate with the backend.
I would expect your apps to reach your backend through its internal hostname QOVERY_APPLICATION_ZB84A9709_HOST_INTERNAL, which should result in using the internal network instead of the external one (as opposed to QOVERY_APPLICATION_ZB84A9709_HOST_EXTERNAL, which goes through the external network).
Can you confirm which value your front is using?
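
For reference, this is roughly what I would expect on your server side (just a sketch using Node 18+ built-in fetch; the port and path are placeholders, not your real values):

```typescript
// Sketch only: Node 18+ built-in fetch; the port and path below are placeholders.
const backendHost = process.env.QOVERY_APPLICATION_ZB84A9709_HOST_INTERNAL;
// (QOVERY_APPLICATION_ZB84A9709_HOST_EXTERNAL would go through the external network instead.)

async function callBackend(): Promise<unknown> {
  const res = await fetch(`http://${backendHost}:3000/api/status`);
  if (!res.ok) throw new Error(`backend returned ${res.status}`);
  return res.json();
}
```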

Also, since this is a frontend app, I suppose it's serving static files that are executed client side, so I would expect those requests to go through the external network.
Can you confirm?

Cheers

Yes, it is using the internal hostname app-zb84a9709-agent-api. If I use qovery shell, I can see that this hostname resolves to a 10.x.x.x IP.
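
The same check done from Node instead of qovery shell, for reference (a quick sketch, not our actual code):

```typescript
// Quick sketch (ESM / top-level await): resolve the internal hostname from Node,
// equivalent to the lookup done in qovery shell.
import { lookup } from "node:dns/promises";

const { address } = await lookup("app-zb84a9709-agent-api");
console.log(address); // a 10.x.x.x address
```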

It is a front-end application with server-side rendering; the back-end is called from the server-side code. The call wouldn't succeed in a client-side context because an internal hostname is used.

Indeed, that's clear. Let me do some testing on my end so I can give you a better explanation.

This behavior is happening on Scaleway, right? I was looking at AWS.

On SCW, we are using the default Scaleway-managed stack: Cilium + kube-proxy.

Ok, so after investigation: it's normal. On Scaleway we use the default managed config and don't specify any CIDR blocks, and Scaleway seems to use this range for its pod IPs, along with others.

The flow is:

  • QOVERY_APPLICATION_ZB84A9709_HOST_INTERNAL resolves to the service app-zb84a9709-agent-api, which belongs to the service IP range (on Scaleway it seems to be 10.x.x.x).
  • The service app-zb84a9709-agent-api load-balances to the pods using the pod endpoint IPs (e.g. 100.64.236.102).

[screenshot: CleanShot 2024-01-31 at 15.09.52]

You are right, those IPs are virtual and are handled by kube-proxy / the CNI, which maps them to the proper target.

Is this documented somewhere? Can we be sure the range is 100.64.0.0/16?

Unfortunately I am not finding anything on the Scaleway side, only a feature request asking for the ability to customize pod CIDRs => Kapsule / K8S - Custom Service / Pod CIDR · Scaleway Feature Requests
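
In the meantime, if you do go the allow-list route on your back-end, here is a rough sketch of a plain IPv4 CIDR check (the exact block to trust, 100.64.0.0/16 or something wider, is exactly what we'd need Scaleway to confirm):

```typescript
// Rough sketch of an IPv4 CIDR membership check; which CIDR to trust is still to be confirmed.
function ipv4ToInt(ip: string): number {
  return ip.split(".").reduce((acc, oct) => (acc << 8) + Number(oct), 0) >>> 0;
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = Number(bitsStr);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
}

// The pod IP observed earlier in this thread:
console.log(inCidr("100.64.236.102", "100.64.0.0/16")); // true
console.log(inCidr("100.64.236.102", "10.0.0.0/8"));    // false
```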
