AWS Cluster scaling issue - how does it work?

Hi there,

We have deployed a cluster with Qovery (t3.large instances) with a minimum of 3 instances.

Here are our Node.js app settings:

When monitoring with Netdata, I can see there are 6 or more instances running.

Why are there so many instances instead of 3?
We haven't been using the app for quite a long time (about an hour), so I would expect Qovery to scale the cluster down to the minimum number of instances.
Thanks
Jérémy.

@Pierre_Mavro is this normal behavior?

cc @Anouck_Colson it would be great to improve the documentation on how the autoscaler works, if that is not already covered.

Hi @JrmyDev ,

Based on the AWS website, t3.large instances have 2 vCPUs (Amazon EC2 Instance Types - Amazon Web Services):

In your case, with Netdata and your app, you're regularly at the limit of the CPU allocation on your 5 nodes. Some pods can't be rescheduled onto another node because they have to run on every server, so Kubernetes can't do anything about them. You can have a look at it:

Combined with the memory allocation, this leaves Kubernetes unable to free up a node, because it can't find a way to reschedule the workloads running on it elsewhere.
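If you want to check this yourself, here is a rough sketch (not an official Qovery tool; it assumes you have a kubeconfig for the cluster and the `kubernetes` Python client installed) that sums the pod resource *requests* per node, which is what the scheduler and the cluster-autoscaler look at when deciding whether a node can be removed:

```python
# Sum CPU/memory requests of running pods per node, compared to what the node offers.
# The autoscaler only removes a node when everything on it (apart from node-local
# pods such as DaemonSets) fits on the remaining nodes.
from kubernetes import client, config

def cpu_to_millicores(v: str) -> int:
    # "500m" -> 500, "2" -> 2000
    return int(v[:-1]) if v.endswith("m") else int(float(v) * 1000)

def mem_to_mib(v: str) -> float:
    units = {"Ki": 1 / 1024, "Mi": 1, "Gi": 1024}
    for suffix, factor in units.items():
        if v.endswith(suffix):
            return float(v[: -len(suffix)]) * factor
    return float(v) / (1024 * 1024)  # plain bytes

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    name = node.metadata.name
    alloc_cpu = cpu_to_millicores(node.status.allocatable["cpu"])
    alloc_mem = mem_to_mib(node.status.allocatable["memory"])
    pods = v1.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={name},status.phase=Running"
    ).items
    req_cpu, req_mem = 0, 0.0
    for pod in pods:
        for c in pod.spec.containers:
            reqs = c.resources.requests or {}
            req_cpu += cpu_to_millicores(reqs.get("cpu", "0"))
            req_mem += mem_to_mib(reqs.get("memory", "0"))
    print(f"{name}: cpu {req_cpu}/{alloc_cpu}m, mem {req_mem:.0f}/{alloc_mem:.0f}Mi")
```

If the requested CPU or memory on every node is close to the allocatable amount, the autoscaler has nowhere to move the pods of a candidate node, so it keeps all nodes up even when the apps are idle.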

In this specific case, I advise you to use bigger instances (but fewer of them), so you won't waste as many resources as you do today.
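To give a rough idea of why fewer-but-bigger nodes help: every node pays a fixed cost for the pods that must run on each server (Netdata, kube-proxy, and other system pods). The per-node overhead value below and the t3.xlarge comparison are assumptions for illustration, not measurements from your cluster:

```python
# Back-of-the-envelope: same total vCPU, but fewer nodes means less fixed overhead.
PER_NODE_OVERHEAD_VCPU = 0.4  # assumed DaemonSet + system reservation per node

def usable_vcpu(node_count: int, vcpu_per_node: int) -> float:
    return node_count * (vcpu_per_node - PER_NODE_OVERHEAD_VCPU)

print(usable_vcpu(6, 2))  # 6 x t3.large  (2 vCPU each) -> 9.6 vCPU left for apps
print(usable_vcpu(3, 4))  # 3 x t3.xlarge (4 vCPU each) -> 10.8 vCPU left for apps
```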

