Fine-grained control of resource requests and limits

Hi,

We noticed that resource limits are set to match the resource requests. Is it possible to remove the limits? Also, for a low-traffic preview environment, the minimum of 0.25 vCPU per application/pod is rather large. Is there a way to get more fine-grained control and set something like 0.025-0.05 vCPU for our staging/preview environments?
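
For reference, what we are after in plain Kubernetes terms looks roughly like this (a sketch using the official Kubernetes Python client; the memory values are just examples, and this is not something Qovery exposes today):

```python
from kubernetes import client

# What we observe today: the request is coupled to the limit, with a 0.25 vCPU floor.
current = client.V1ResourceRequirements(
    requests={"cpu": "250m", "memory": "256Mi"},
    limits={"cpu": "250m", "memory": "256Mi"},
)

# What we would like for staging/preview pods: a much smaller request
# (25-50 millicores, i.e. 0.025-0.05 vCPU) so more pods fit on a node.
desired = client.V1ResourceRequirements(
    requests={"cpu": "25m", "memory": "256Mi"},
)
```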

Hello,

You are right. We just released a change that lowers the minimum CPU in the API to 0.01 vCPU, down from the previous minimum of 0.25 vCPU.

At the moment it is only available on the API side, but we are going to bring the change to the UI as well.

When can we expect it in the UI?

This is great for scheduling a lot of low-traffic pods, but it will also amplify the issue of having them capped at very little CPU even when the worker node isn't under much load. Is there a workaround for resource limits? Are they available via the API, for example?

Regarding limits, we are going to discuss it, but for now there is no plan to allow removing them.

The reason is that people would remove them, as in your case, for dev environments, and it is precisely those environments that are the most likely to introduce bugs and resource over-consumption.
Without a limit, a single test app can grind a node to a halt, together with the other apps running on it. If those other apps happen to be your production, it can be dramatic.

In our mind Qovery should not allow unsafe usage of a cluster.

P.S.: for the UI release, I would say in the coming days. (You can use your browser's developer console to edit and re-send an API request in the meantime.)

I get the argument, but you don’t have to remove the limit completely. If you gave us the option to increase it, we would be able to let dev pods “burst” while still keeping them capped at, say, 0.5 or 1 vCPU.
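
For concreteness, this is roughly what that would look like at the Kubernetes level (again just a sketch with the Python client; the numbers are examples of what we would choose, not something Qovery offers today):

```python
from kubernetes import client

# Burstable pattern: a small request so many dev pods fit on a node,
# plus a hard cap so a runaway pod still gets throttled at the limit.
burst_capped = client.V1ResourceRequirements(
    requests={"cpu": "25m"},  # what the scheduler reserves for the pod
    limits={"cpu": "1"},      # the pod may burst up to 1 vCPU, never beyond
)
```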

Hi @prki

As Romain said, bursting leads to instability, which you don’t want. You can lose nodes and lose access to applications, which is not pleasant.

We want to give the best experience possible with Qovery and offer a production-grade solution. We apply best practices to avoid such situations.

In the past, we tried allowing bursting on a CI cluster on DigitalOcean to reduce cost. It was more unstable than expected, even though the burst wasn’t that high. From experience, it’s bad practice. We know how things go with bursting, so we don’t want to offer it.

Providing support under those conditions is time-consuming for everybody, customers and the team included. At the end of the day, you can do what you want, since you can connect to your cluster and deploy workloads with whatever you want inside, but it will be hard for us to support you in that situation.
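
For completeness, such a direct change would look something like the sketch below, using the Kubernetes Python client (the deployment name, namespace, and container name are placeholders, and Qovery may overwrite the change on the next deployment; we can't support it):

```python
from kubernetes import client, config

# Unsupported sketch: patch a deployment's CPU resources directly on the cluster.
# "my-app" and "my-namespace" are placeholders; adjust them to your own setup.
config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "my-app",  # strategic merge patch matches containers by name
                        "resources": {
                            "requests": {"cpu": "25m"},
                            "limits": {"cpu": "1"},
                        },
                    }
                ]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="my-app", namespace="my-namespace", body=patch)
```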

We’re always happy to add functionality, features, and solutions when it doesn’t hurt production stability.

Thanks for your understanding.

Pierre

Hi @prki ,

The UI has been updated and you can now set a minimum of 10 mvCPU (i.e. 0.01 vCPU).

Hi @a_carrano,

Thanks for the update. We started using preview environments with low vCPU values so we can fit more pods on our cluster. However, we are now facing an issue where our apps run very slowly at 10 mvCPU, even though overall cluster utilization is below 10%.

I would like to bring your attention back to customizing resource requests and limits separately. Maybe it could be a feature for dev clusters only, or maybe you could hide it somewhere in the API, but without it we are wasting a lot of resources.
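
To illustrate the waste, this is roughly how we compare what the pods request with what they actually use (a sketch against the metrics-server API via the Kubernetes Python client; it assumes metrics-server is installed on the cluster):

```python
from kubernetes import client, config

def cpu_to_millicores(value: str) -> float:
    """Convert a Kubernetes CPU quantity ('250m', '1', '12345678n', ...) to millicores."""
    if value.endswith("n"):
        return float(value[:-1]) / 1_000_000
    if value.endswith("u"):
        return float(value[:-1]) / 1_000
    if value.endswith("m"):
        return float(value[:-1])
    return float(value) * 1000

config.load_kube_config()
core = client.CoreV1Api()
custom = client.CustomObjectsApi()

# Sum CPU requests across all pods (what the scheduler has reserved).
requested = 0.0
for pod in core.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        if c.resources and c.resources.requests and "cpu" in c.resources.requests:
            requested += cpu_to_millicores(c.resources.requests["cpu"])

# Sum actual CPU usage as reported by metrics-server.
used = 0.0
for pod in custom.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")["items"]:
    for c in pod["containers"]:
        used += cpu_to_millicores(c["usage"]["cpu"])

print(f"requested: {requested:.0f}m, actually used: {used:.0f}m")
```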