Support for a1.2xlarge

Hello @Qovery_Team
Are you able to support a1.2xlarge instances for clusters on AWS?
Our application is more CPU-intensive than memory-intensive, so we are running out of CPU allocation and need to auto-scale.

Do you see any issues with using a1.2xlarge vs the t3.xlarge we are using today?


At the moment, Qovery only supports the x86_64 architecture. Since A1 instances are ARM-based, you won't be able to use them.

That said, for high CPU usage you can use c4, c5, or c6 instances.

Please let me know if you need further help.

Hey @Enzo ,
The problem is the higher cost per node for the c4, c5, and c6 instances.

A t3.xlarge costs $0.166 per hour (on-demand) and comes with 4 vCPUs and 16 GB RAM.

An a1.2xlarge costs $0.20 per hour and comes with 8 vCPUs. That is why I was interested in leveraging it. :slight_smile:

The c4.xlarge only comes with 4 vCPUs; the c4.2xlarge brings more vCPUs, but at $0.399 per hour it is more than double the cost of the t3.xlarge, so in this case we are better off staying with our 4-node cluster of t3.xlarge.
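To make the trade-off concrete, here is a quick back-of-the-envelope comparison of cost per vCPU-hour using the on-demand prices quoted in this thread (prices are approximate, vary by region, and change over time; the numbers below are just the ones from this discussion, not authoritative AWS pricing):

```python
# Cost-per-vCPU comparison using the on-demand prices quoted in this thread.
# These figures are approximate and region-dependent -- check the AWS
# pricing pages for current numbers.
instances = {
    "t3.xlarge":  {"hourly_usd": 0.166, "vcpus": 4},
    "a1.2xlarge": {"hourly_usd": 0.20,  "vcpus": 8},
    "c4.2xlarge": {"hourly_usd": 0.399, "vcpus": 8},
}

def cost_per_vcpu(spec):
    """Hourly on-demand price divided by vCPU count."""
    return spec["hourly_usd"] / spec["vcpus"]

for name, spec in sorted(instances.items(), key=lambda kv: cost_per_vcpu(kv[1])):
    print(f"{name}: ${cost_per_vcpu(spec):.4f} per vCPU-hour")
```

On these numbers the a1.2xlarge is the cheapest per vCPU, which is exactly why it looked attractive; the catch, as noted above, is that it is ARM-based and therefore not usable with Qovery's current x86_64-only support.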