Is this doable through Qovery? We have a scenario where the autoscaler is evicting our postgres database pod, causing consistency errors with another service in our stack that relies on logical replication. We want to tell the autoscaler never to evict a postgres pod.
Indeed, there is no way for the time being to prevent the autoscaler from evicting certain pods.
Annotations might land at some point later on that would let you prevent this from happening.
Another option for you would be Karpenter (see this post), a feature aiming to fully replace the autoscaler; with specific annotations you should then be able to prevent your pod from moving.
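For reference, these are pod-level annotations on a plain Kubernetes cluster; Qovery does not expose them yet, so treat this as a sketch of what the underlying mechanism looks like (pod name and image below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-0            # hypothetical pod name
  annotations:
    # Cluster Autoscaler: do not evict this pod when scaling down nodes
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    # Karpenter (v0.32+): do not voluntarily disrupt this pod;
    # older beta releases used karpenter.sh/do-not-evict: "true"
    karpenter.sh/do-not-disrupt: "true"
spec:
  containers:
    - name: postgres
      image: postgres:16
```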
For background on why we allow the postgres pods to move: it lets the autoscaler properly optimize your nodes. Otherwise you can end up wasting a lot of resources if postgres pods are spread across many nodes, preventing those nodes from being freed. See this post.
Also, I am assuming you are using container DBs, which are not meant for production. If you need something reliable, you should consider moving to a managed database via RDS.
Using larger EC2 instances might also help reduce pod moves (to be tested).
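One related Kubernetes-native mechanism worth knowing about (again, not something Qovery exposes today, so purely background): the Cluster Autoscaler respects PodDisruptionBudgets, so a PDB with `maxUnavailable: 0` will also block voluntary evictions. The selector label here is hypothetical and would need to match your postgres pod:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-no-evict
spec:
  maxUnavailable: 0          # no voluntary disruptions allowed
  selector:
    matchLabels:
      app: postgres          # hypothetical label; must match your postgres pod
```

Note this has exactly the trade-off described above: a node running a protected pod can never be drained and freed.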
Thanks @bchastanier, is there an ETA for a GA release of Karpenter? I saw that post, but I'm hesitant to move to something in beta. Is preventing certain pods from being evicted supported today in the Karpenter beta?
This is for CI and regression environments. Our production database is not containerized.
Also worth noting: we use our own custom postgres image as a Qovery service, to support things like pg_cron and some other extensions we rely on. We aren't using the Qovery-provided database service.
Karpenter is still in early beta, so you might find a few issues. But we are already working on the annotations, and yes, you will be able to configure them on your applications (ETA: end of next month).
Is it possible to temporarily disable the autoscaler in our staging cluster until custom annotations are available? We have a service (Epsio) that does not play well with Kubernetes moving pods around, and it is causing serious blockers in our regression process at the moment. I'm not sure of another solution for us right now.
Hi @Kyle_Flavin, there is no way per se at the moment to deactivate the autoscaler, unless you set min = max number of instances on your cluster. Doing this will prevent the autoscaler from removing nodes. But keep in mind that nodes can still occasionally be replaced by the provider to perform some operations, though that should happen rarely imo.
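In Qovery this is just the min/max node count in the cluster settings. For illustration, on raw EKS the equivalent would be a node group pinned like this (cluster name, region, instance type, and sizes are all hypothetical):

```yaml
# eksctl cluster config sketch: min == max leaves the autoscaler
# no room to add or remove nodes
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: staging              # hypothetical cluster name
  region: us-east-1          # hypothetical region
managedNodeGroups:
  - name: default
    instanceType: t3.xlarge  # hypothetical instance type
    minSize: 4
    maxSize: 4               # pinned: no scale-up or scale-down
    desiredCapacity: 4
```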