Hello!
I’ll try to answer step by step.
First, the main control plane is actually Qovery: any modification made on the cloud provider side will be overridden by Qovery. This time it's fine, but be careful: a wrong manipulation on the provider side could break your cluster or the link with the Qovery control plane.
I lowered the “Desired” state of the Auto Scaling Group on AWS → It ended up being modified back (by Qovery, I guess?) a few minutes later, with two more nodes than originally, hehe…
That's because the cluster autoscaler handles the desired size by itself, based on resource consumption.
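If you want to see why the autoscaler made a decision, you can check its status and logs. A minimal sketch, assuming it runs in kube-system under the usual cluster-autoscaler name and publishes its default status ConfigMap (names may differ on a Qovery-managed cluster):

```bash
# Status ConfigMap written by the cluster autoscaler
# (default name; it may differ depending on how it is deployed).
kubectl -n kube-system get configmap cluster-autoscaler-status \
  -o jsonpath='{.data.status}'

# Recent autoscaler logs explaining scale-up/scale-down decisions
# (deployment name is an assumption).
kubectl -n kube-system logs deployment/cluster-autoscaler --tail=50
```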
I lowered the range in the Qovery console and triggered an update → The range and desired state in AWS are still higher (max is 2x higher).
Edit: Looks like the current autoscaling group range is the same as on Qovery now, so I guess it just took the 15 minutes to update on AWS?
When you modify the node pool, it takes around 20 minutes to become effective, which could explain the difference between the AWS UI and the Qovery UI.
Something like 15 minutes after these attempts, the cluster actually scaled down a bit, but I’m not sure why.
When you change the pool settings and a scale-down is triggered, all pods on the node about to be deleted must be migrated to another one. Once that's done, the node can be deleted properly. This operation takes time.
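If you want to follow that process live, here is a small sketch (event reasons depend on the autoscaler version, so the grep pattern is only an approximation):

```bash
# Watch nodes appear/disappear while the new pool settings are applied.
kubectl get nodes -w

# In another terminal, look at recent events for scale-down / eviction activity.
kubectl get events -A --sort-by=.lastTimestamp | grep -iE 'scaledown|drain|evict'
```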
Hence my question: what’s the recommended way to scale down a cluster?
The best way is to let the autoscaler do the job. For your information, the scaling margin is 10% CPU, so if your node uses more than 90% CPU, a new node will be created. If the pods running on a node can fit on another one and keep it under 90% CPU usage, the node will be scaled down. This resource consumption is checked every minute and is based on actual workload.
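If you're curious about the exact thresholds your autoscaler runs with, you can inspect its flags. A sketch, assuming a standard cluster-autoscaler deployment in kube-system (the name, namespace, and flag set may differ on your cluster):

```bash
# Print the container args; look for flags such as
# --scale-down-utilization-threshold and --scan-interval.
kubectl -n kube-system get deployment cluster-autoscaler \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```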
(Side question: is there any way to check nodes resource allocation?)
A feature coming in v3 will display all the information you'll need. In the meantime, if you're using k9s, switch to the node view with `:no`, then press `ctrl+w` if you don't see the resource columns. If you want more detail, press `enter` on a node; the same shortcuts display the pods' resource usage. If you're using kubectl and want a quick overview, `kubectl top nodes` is what you need. For a more detailed view, use `kubectl describe nodes`.
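Putting those two together, a quick sketch to compare current usage with what is requested versus allocatable on each node (`kubectl top` needs metrics-server to be installed):

```bash
# Live CPU/memory usage per node (requires metrics-server).
kubectl top nodes

# Jump straight to the "Allocated resources" section (requests vs allocatable)
# for every node; -A 10 is just enough context to show the whole table.
kubectl describe nodes | grep -A 10 'Allocated resources'
```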
Again, be careful: a wrong manipulation could break it all.