Hello,
Our account was running fine on 3 t3.large nodes, but suddenly we were auto-scaled up to 5 t3.large nodes.
I upgraded our node settings to t3.xlarge; however, we are still at 5 nodes in our cluster, which is very expensive for us.
Why is this happening? This is a dev cluster, so we are not really doing much there that would explain the auto-scaling.
If all of those nodes have applications running on them, Kubernetes is not going to kill them in order to shrink the node pool. This can happen when the number of apps grows and then shrinks back: the autoscaler added nodes for the peak, but the remaining pods are now spread across all 5 nodes, so none of them are empty enough to be removed.
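You can check whether that is the case by listing which pods are pinned to each node (the node name below is a placeholder; substitute your real ones from the first command):

    # List all nodes in the cluster
    kubectl get nodes
    # Show every pod scheduled on a given node (system pods count too,
    # and can block the autoscaler from removing that node)
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node-4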
If you are sure you can run on 3 nodes, you can force the max size of your cluster down to 3 and redeploy it. That will force the removal of the extra nodes and reschedule all your apps onto the remaining 3 nodes.
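If your setup lets you run kubectl against the cluster, a rough sequence like the one below should empty the extra nodes first so the pool can shrink cleanly (again, node-4 and node-5 are placeholder names, not your actual nodes):

    # Stop new pods from being scheduled on the extra nodes
    kubectl cordon node-4
    kubectl cordon node-5
    # Evict their pods so they get rescheduled onto the remaining 3 nodes
    kubectl drain node-4 --ignore-daemonsets --delete-emptydir-data
    kubectl drain node-5 --ignore-daemonsets --delete-emptydir-data
    # Then lower the pool's max node count to 3 in your provider's
    # settings so the now-empty nodes can be removed

Draining before lowering the max size avoids abruptly killing pods; the scheduler moves them while the extra nodes are still around.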