My team and I are mostly interested in Qovery for its ease of use (<3). It lets us benefit from both the reliability of AWS and the DX of platforms like Digital Ocean, and we love it. We use containers for all of our apps but we don’t know much about Kubernetes, which is another reason we use Qovery instead of setting up the infrastructure ourselves.
Sadly, we see incredibly high resource usage on Qovery compared to our previous platform. For instance, our NodeJS API used 300MB of RAM on Digital Ocean but requires 8GB of RAM on Qovery.
I understand that Qovery relies on Kubernetes and that some memory is required to handle that, but I would like to understand how that can lead to roughly 25x the memory usage. We are currently using K3S on EC2, could that be the cause? We are thinking about moving to EKS with Qovery, but I am afraid of having to pay for 3 x 8GB instances to run one single NodeJS API, would that be the case?
We might misunderstand how Qovery and Kubernetes work, apologies if this is not the right place to ask these questions.
Thanks for choosing Qovery, and it’s great to hear that you’re enjoying our platform! We’re always here to help, and I’d be happy to explain how Qovery works with Kubernetes and address your concerns about resource usage.
Regarding the high resource usage you’ve observed, Qovery does indeed install several dependencies to provide a comprehensive and seamless experience for our users. Some of these dependencies include Cert Manager (for SSL/TLS certificate management), Loki (for log aggregation), and Promtail (for log collection). These open-source services are essential for Qovery to function optimally and provide the features you expect from our platform.
While these services do consume CPU and RAM, they have been heavily optimized to use as few resources as possible. However, the overall resource consumption on Qovery can be higher than on your previous platform because of the additional services and features we provide.
Those dependencies should not represent more than 2GB of RAM and 2 vCPU from what I remember (I need to confirm with the team - cc @a_carrano @benjaminch).
@AxelDeSutter Could you please tell me which EC2 instance type you are using and tell me how many applications you are running on it? Can you also confirm the CPU and RAM you allocate to your apps running on it?
As you can see in the diagram below, on your t3a.medium instance, 0.5 vCPU and 2.5GB of RAM are already used by some internal services (as explained above). This is already optimized and can’t be lowered, unfortunately. So your allocatable CPU is 1.5 vCPU, and your allocatable RAM is 1.5GB. So what does that mean for your application when you deploy it the first time? It means this:
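To make the arithmetic concrete, here is a small sketch. The t3a.medium totals (2 vCPU, 4GB RAM) come from the AWS instance spec, and the overhead figures (0.5 vCPU, 2.5GB) are the ones quoted above:

```python
# Rough allocatable-resource arithmetic for a t3a.medium node.
NODE_CPU, NODE_RAM_GB = 2.0, 4.0          # t3a.medium totals (AWS spec)
OVERHEAD_CPU, OVERHEAD_RAM_GB = 0.5, 2.5  # internal services, as quoted above

# What is left for your own applications on this node:
allocatable_cpu = NODE_CPU - OVERHEAD_CPU
allocatable_ram_gb = NODE_RAM_GB - OVERHEAD_RAM_GB

print(allocatable_cpu, allocatable_ram_gb)  # 1.5 1.5
```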
Once your app is deployed, it consumes 0.5 vCPU and 1GB of RAM on this t3a.medium instance. So now the free resources are 1 vCPU and 0.5GB of RAM. The problem is that when you deploy a new app version, we use Kubernetes’ Rolling Update deployment strategy. This means we need an extra 0.5 vCPU and 1GB of RAM during the update! But we only have 0.5GB of RAM available, so the new pod cannot be scheduled.
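The behaviour described above corresponds to Kubernetes’ Rolling Update strategy on a Deployment. A minimal, illustrative manifest fragment (the app name is hypothetical, not something Qovery generates):

```yaml
# Illustrative Deployment fragment: during a rolling update, Kubernetes starts
# the new pod (maxSurge: 1) before terminating the old one (maxUnavailable: 0),
# so both versions briefly need resources at the same time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-api        # hypothetical app name
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # one extra pod allowed during the update
      maxUnavailable: 0  # the old pod stays up until the new one is ready
```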
You can reduce the memory allocated to your application so that two instances can run in parallel during the update; together they can’t exceed 1.5GB of RAM. So maybe 512MB of RAM would be a good fit?
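In Kubernetes terms, that suggestion maps to the container’s resource requests and limits. A hedged sketch (container name and image are illustrative; the values are the ones discussed above):

```yaml
# Illustrative container spec: with a 512Mi request, two pods (old + new
# versions during a rolling update) fit inside the ~1.5GB of allocatable RAM.
containers:
  - name: node-api                        # hypothetical container name
    image: registry.example.com/node-api  # hypothetical image
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```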
To have more RAM, you need to upgrade your t3a.medium instance to a t3a.large instance.
I have a small workaround if you want to stick to a t3a.medium instance with 1GB of RAM allocated to your app: stop your app before an update, then start it with the new version. It’s not ideal at all, but at least it works.
Thank you very much for this long, detailed and super valuable answer Romaric! Really happy to see that your support follows the same standard as your software: excellence.
Can I ask you one last question? In an EKS cluster, do these services run on each instance or only on one node? I understood the issue with the rolling update, but maybe having multiple nodes can solve it, even with smaller instances?
It’s a good question. By the way, I think you might find this thread very interesting; I took the time to explain how resource allocation works on Kubernetes.
To answer your question: on an EKS cluster, your apps will be scheduled across one or multiple nodes, so in a nutshell, you will not get this kind of issue, even with smaller instances. However, we still recommend using instances with at least 2 vCPU and 4GB of RAM.