AWS EKS Cluster Endpoint Access

Our team has a security requirement to set our EKS cluster endpoint access to private. This setting can be found under EKS → Clusters → qovery-{cluster-id} → Networking → Manage endpoint access. Currently it is set to public with an allowlist of IP ranges.
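For reference, the same setting the console page exposes can be inspected and changed with the AWS CLI. This is just a sketch: `qovery-{cluster-id}` is a placeholder for your actual cluster name, and `203.0.113.0/24` is a documentation-only example CIDR, not a real allowlist entry.

```shell
# Inspect the current endpoint access settings for the cluster.
# Replace qovery-{cluster-id} with your actual EKS cluster name.
aws eks describe-cluster \
  --name "qovery-{cluster-id}" \
  --query "cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess,allowlist:publicAccessCidrs}" \
  --output table

# Equivalent of the console's "Manage endpoint access" page:
# enable both public and private access with an example allowlist CIDR.
aws eks update-cluster-config \
  --name "qovery-{cluster-id}" \
  --resource-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```

Note that `update-cluster-config` triggers a cluster update that takes several minutes to apply.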

I tested changing it to private on a dev cluster and our applications still appear to be functioning. Is this safe to do? Will this affect future Qovery cluster operations or cause configuration drift? Can we get this setting added to the Advanced Settings section of the Cluster Settings?


After more testing: it prevents any new deployments from occurring, though currently running applications were not affected. Is there a CIDR I can use for the "Public and private" access mode that is not
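For anyone experimenting with the allowlist, a quick way to check whether a given source IP would be covered by a candidate CIDR list is Python's standard `ipaddress` module. This is a sketch only: the IPs and CIDRs below are made-up examples, not Qovery's actual egress ranges.

```python
import ipaddress

def is_allowed(source_ip: str, allowlist: list[str]) -> bool:
    """Return True if source_ip falls inside any CIDR in the allowlist."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowlist)

# Hypothetical allowlist: an office range plus a single NAT gateway IP.
allowlist = ["203.0.113.0/24", "198.51.100.7/32"]

print(is_allowed("203.0.113.42", allowlist))  # True: inside the /24
print(is_allowed("192.0.2.10", allowlist))    # False: not in any range
```

This can help verify, before touching the cluster config, that a proposed allowlist actually covers the addresses your deployments originate from.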

I have the same question. It'd be great to have more details about how to handle this.


Any ideas on this one? Even an interim solution or setting change?

Hi @colin,

We need those endpoints to be public because the Qovery engine accesses your Kubernetes cluster from our infrastructure. If you set them to private, every deployment will fail.

Regarding the access restrictions, we wanted to ship a solution yesterday, but it unfortunately failed. TL;DR: we would like to limit the IP addresses used, but we would then be rate-limited by third parties (like Docker Hub) during the build phase.
This is why we reverted the change and cannot provide you with a limited set of IPs.

The only option I see for now is to let the Qovery engine run on your cluster, which would make it possible to keep the EKS cluster endpoint private. That capability should be available in 1-2 months.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.