We noticed that internal network requests between our applications are coming from IPs in the 100.64.x.x range, which is not a local IP range. Can you explain how inter-cluster application communication is set up, and which IP range is used? We need this for an application that requires IP whitelisting.
Do you mind being a little more specific? From what I understand, you have two applications running on the same cluster, and from one app the calls appear to come from outside rather than from the internal network.
Am I right?
Do you mind sharing the Qovery console links for those two apps so I can have a look?
We have two applications running on the same cluster, a front-end and a back-end, communicating via the internal network. However, when we check the request IP on the back-end application we see 100.64.x.x (external), while we would expect something in 10.x.x.x (local). We think a Kubernetes CNI plugin is installed that proxies traffic and uses the 100.64.x.x range. Is this correct? Which IP range is being used?
Yes, a CNI plugin is indeed installed, but first I would like to know which address / env var your front-end is using to communicate with the back-end.
I would expect apps communicating with your back-end to use its internal hostname, QOVERY_APPLICATION_ZB84A9709_HOST_INTERNAL, which should route traffic over the internal network instead of the external one (as opposed to QOVERY_APPLICATION_ZB84A9709_HOST_EXTERNAL, which goes over the external network).
Can you confirm which value your front is using?
Also, since this is a front-end app, I suppose it is serving static files that are executed client-side, so I would expect those requests to go over the external network.
Can you confirm?
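To illustrate the distinction, here is a minimal sketch (a hypothetical helper, not actual Qovery SDK code) of server-side code picking the internal hostname from the injected environment variables and falling back to the external one:

```python
import os

def backend_base_url() -> str:
    """Build the back-end base URL from Qovery-injected env vars.

    The _HOST_INTERNAL hostname only resolves inside the cluster, so
    traffic stays on the internal network; the _HOST_EXTERNAL variant
    goes over the public network.
    """
    internal = os.environ.get("QOVERY_APPLICATION_ZB84A9709_HOST_INTERNAL")
    if internal:
        # Internal hostname: plain HTTP on the cluster network.
        return f"http://{internal}"
    # Fallback: external hostname, reached over the public network.
    return f"https://{os.environ['QOVERY_APPLICATION_ZB84A9709_HOST_EXTERNAL']}"
```

A client-side (browser) bundle could never use the internal value, since the hostname does not resolve outside the cluster.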
Yes, it is using the internal hostname app-zb84a9709-agent-api. If I use qovery shell I can see this hostname resolves to a 10.x.x.x IP.
It is a front-end application with server-side rendering; the back-end is called from the server-side code. It wouldn't work in the client-side context because an internal hostname is used.
OK, so after investigation: this is normal. On Scaleway we use the default managed configuration and don't specify any CIDR blocks; Scaleway seems to use this range for its pod IPs, along with others.
The flow is:
QOVERY_APPLICATION_ZB84A9709_HOST_INTERNAL resolves to the service app-zb84a9709-agent-api, which belongs to the service range (on Scaleway this seems to be 10.x.x.x).
The service app-zb84a9709-agent-api load-balances to pods using the pod endpoint IPs (e.g. 100.64.236.102).
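As a sanity check on the two ranges above (the service IP below is a made-up example; the pod endpoint IP is the one observed in this thread): 10.x.x.x is RFC 1918 private space, while 100.64.0.0/10 is the RFC 6598 shared address space, which some CNIs reuse for pod IPs even though it is not a "local" RFC 1918 range:

```python
import ipaddress

# Hypothetical service ClusterIP in the 10.x service range, and the
# pod endpoint IP observed on the back-end.
service_ip = ipaddress.ip_address("10.32.5.17")    # example only
pod_ip = ipaddress.ip_address("100.64.236.102")    # observed endpoint

# RFC 1918 private space: where the user expected all internal traffic.
assert service_ip in ipaddress.ip_network("10.0.0.0/8")

# RFC 6598 shared address space (often used for carrier-grade NAT),
# reused here by the CNI for pod IPs - hence it looks "external".
assert pod_ip in ipaddress.ip_network("100.64.0.0/10")
assert pod_ip not in ipaddress.ip_network("10.0.0.0/8")
```

So for IP whitelisting you would need to allow the pod CIDR (the 100.64.0.0/10 range here), not just 10.x.x.x.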
You are right, those IPs are virtual and are handled by Kubernetes / the CNI, which maps them to the proper target.