When does scaling down happen on a cluster?

Hi,

We saw the cluster size increase during a recent Kubernetes update to 1.25. The cluster grew from 8 to 10 nodes and never scaled back down. I'm fairly sure usage has recently gone down, so the increase in size doesn't make sense. I also checked with kubectl-view-allocations and I see pretty low resource allocation:

```
 Resource                    Requested         Limit  Allocatable    Free
 cpu                        (37%) 14.4    (44%) 17.1         39.2    22.1
 memory                   (42%) 28.0Gi  (59%) 38.9Gi       66.0Gi  27.1Gi
```

When does scaling down happen? Is this normal? Is there any way to trigger a compaction that moves pods onto fewer nodes so scale-down can start? We did try a full environment redeployment, but it did not help.
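
For reference, a couple of checks that can show why the autoscaler keeps a node around (the node name is a placeholder, and the status ConfigMap below is the upstream Cluster Autoscaler default, which may not be exposed on a managed cluster):

```
# Node events often mention scale-down candidates or what blocks eviction
kubectl describe node <node-name> | grep -i -A 3 "scale"

# If the Cluster Autoscaler status ConfigMap is exposed, it lists
# scale-down candidates and why other nodes are being kept
kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml
```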

Hi @prki

Can you please share the console URL of the cluster so I can take a look? If in doubt, please run a cluster “redeploy”. Autoscaling is evaluated regularly; if a change needs to be made, it happens every 10 minutes.
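
For reference, this matches the upstream Cluster Autoscaler defaults; the exact values may be tuned on managed clusters, so treat these as indicative only:

```
# Upstream Cluster Autoscaler defaults related to scale-down:
--scale-down-unneeded-time=10m          # a node must stay under-utilized this long before removal
--scale-down-utilization-threshold=0.5  # "under-utilized" = requests below 50% of allocatable
--scale-down-delay-after-add=10m        # no scale-down evaluation right after a scale-up
```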

Pierre

Hi @Pierre_Mavro, the cluster in question is this one. Thanks for your help.

After a quick look, you have some applications with large memory and CPU requests compared to your actual node size… And as your cluster gets bigger in terms of node count, I advise you to take bigger nodes. You'll have fewer wasted resources.

Please give nodes double the size a try; packing should be better optimized. Let me know the result.
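
To make the trade-off concrete (the request sizes below are made up for illustration, and the --group-by flag comes from kubectl-view-allocations' help, so adjust if your version differs):

```
# Made-up example of stranded capacity:
#   4-vCPU node + one pod requesting 3 vCPU -> ~1 vCPU left, too small for most other pods
#   8-vCPU node + the same 3 vCPU pod       -> ~5 vCPU left, enough for several smaller pods
#
# Check requested vs allocatable per node to spot stranded capacity:
kubectl-view-allocations --group-by node
```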

Thanks

Yes, we recently added an application that requires significantly more resources, but all our other applications are relatively small and should fit into the little space left on the node running the large one.

I changed the node type, and it did help. I would like to change it back to see whether pods would now be allocated more efficiently, but node type changes cause downtime. I reported that issue here.
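
In case it helps anyone checking the same thing, pod packing can be inspected without changing the node type back, using standard kubectl only:

```
# Pod distribution across nodes
kubectl get pods --all-namespaces -o wide --sort-by=.spec.nodeName

# Per-node requested vs allocatable, as summarized by kubectl
kubectl describe nodes | grep -A 8 "Allocated resources"
```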