EKS Node autoscaling (down)

Issues information

  • OS:
  • Databases: none
  • Programming language and version: TS 4.x
  • Link to your project on GitHub/GitLab: n/a

Hi, I have a project deployed with Qovery on a single EKS cluster. My staging blueprint consists of two light Node.js apps. I have activated preview environments on this project. At some point I had several branches, so preview environments were created and my EKS node count went up.

Those branches and preview environments are now gone (merged, etc.), but my EKS cluster is not scaling down: I still have five EC2 nodes to handle only my staging deployment of two very small Node.js apps :sweat_smile:.

Is there a way to explicitly or automatically ask Qovery to downscale / optimise the number of nodes (besides constraining my cluster to a lower node count through the cluster settings)?

Thank you for your insights.

Hello, could you give me the link to the app in the Qovery console so I can check its resource consumption?

Hi @Yann - I highly encourage you to read this thread. It’s quite insightful on how Kubernetes resource allocation works.

And to apply what you have read, here is the explanation for your cluster.

As you can see here, each node has 1.9 CPU and 3.3 Gi RAM allocatable. If you look at the free RAM across all nodes, you have 3.7 Gi, which is more than the allocatable value of a single node (3.3 Gi). But look at the free CPU: you have 1.6 CPU free, while a node has 1.9 CPU allocatable.
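If you want to double-check these numbers yourself, something like this should work (a minimal sketch, assuming you have kubectl access to the cluster; these figures are requests and capacity, not live usage):

```sh
# Allocatable CPU and memory per node
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory'

# Per-node summary of requested vs allocatable resources
kubectl describe nodes | grep -A 8 'Allocated resources'
```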

In order to scale down and delete a node, all of its pods must be able to move to the other nodes, and for that you need free resources at least equivalent to the node that could be deleted (a kubectl sketch for checking where those requests come from follows this list):

  • 3.3 Gi of free RAM. This one is OK, since you have 3.7 Gi free.
  • 1.9 free CPU. This one is not OK, since you only have 1.6 CPU free.
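One way to see which pods those CPU requests come from, node by node (standard kubectl; the sort flag just groups the output by node):

```sh
# CPU request of every pod in the cluster, grouped by node
kubectl get pods -A --sort-by='.spec.nodeName' \
  -o custom-columns='NODE:.spec.nodeName,NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'
```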

That explains why you still have four nodes running and why the scale-down is not triggered. I hope this makes it clear for you.

Feel free to ask if you have any questions about this resource consumption.

Thank you both for your answers, they gave me some great insights :pray:

I dug a little further: the two apps I have are configured with 0.5 CPU each in Qovery. That’s obviously too high, as they seem to consume much less, so I’ll change that.
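For reference, here is how I compared the configured requests with actual usage (assuming metrics-server is installed on the cluster; `<my-namespace>` is a placeholder for the app’s namespace):

```sh
# Live CPU/memory usage per pod (requires metrics-server)
kubectl top pods -n <my-namespace>

# The CPU/memory requests currently configured on the same pods
kubectl get pods -n <my-namespace> \
  -o custom-columns='POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory'
```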

But speaking of the 1.9 free CPU needed to trigger a downscale: most of the pods in the cluster are kube-system and Qovery-managed pods, not my application pods. They consume much more resources than before, and I don’t really have control over the resources / limits they request, do I?

As you can see here, only two pods are related to my apps (the app-z... pods). How can I influence the other pods’ CPU resources / limits?

You can’t. All the other pods are required to run the EKS cluster properly, and their resources are already optimized.

If you run only those two apps and don’t need more, you can try our EC2 feature. It’s in beta at the moment, but it will drastically reduce your AWS cost, since it doesn’t provision a whole cluster, only a single EC2 instance.