Questions: microservices, load balancers, security, cost reduction on AWS

I received an email from a user with a couple of super interesting questions. I wanted to share my answers with you (with his approval) because they may give you some insights as well. Here is his message:

Hey there, I recently found out about Qovery. I have a few questions about what is possible:

  • Microservices: Can you have multiple apps running in a single environment? Say I have app1 (node), app2 (python), and app3 (java).
  • Load balancing, internal vs. external: Let’s say app1 receives traffic through an external load balancer but needs to communicate with app2 and app3. Can the traffic between the apps go through an internal load balancer?
  • Internal apps: Can you have apps and resources (MySQL) be internal, in a private subnet with no external access to them?
  • Spot Instances: Can you configure apps to use Spot Instances in AWS?
  • Shutdown: Is there an easy way to turn off instances of test environments to save costs? Let’s say turn them on at 8 am and off at 8 pm every day, and keep them off on the weekends.

First, I will talk about what we can do with Qovery v2, as we are about to deprecate our v1. Let’s respond inline.

Q: Can you have multiple apps running in a single environment? Say I have app1 (node), app2 (python), and app3 (java).

A: Absolutely, you can have multiple apps within a single environment, and you can even create an environment from an existing one, e.g., to create a development or staging env. You can take a look at this video showing how to put multiple apps within the same environment.

Video: Qovery v2 beta - Create an env with mono repo apps and PostgreSQL

Q: Load balancing, internal vs. external: let’s say app1 receives traffic through an external load balancer but needs to communicate with app2 and app3. Can the traffic between the apps go through an internal load balancer?

A: Absolutely. Apps within the same environment can communicate through the internal network, or through the external network if you want. It’s up to you; both are possible.
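Since Qovery runs applications on Kubernetes (EKS, as mentioned further down in this thread), the underlying mechanism roughly corresponds to the sketch below. This is only an illustration of the Kubernetes-level idea, not Qovery’s actual generated configuration, and the service names are made up: app2 stays internal-only while app1 is exposed externally.

    # Sketch only (not Qovery's actual config): app2 is reachable solely inside
    # the cluster, while app1 is exposed through an external load balancer.
    apiVersion: v1
    kind: Service
    metadata:
      name: app2-internal          # made-up name
    spec:
      type: ClusterIP              # internal-only, no public endpoint
      selector:
        app: app2
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: app1-public            # made-up name
    spec:
      type: LoadBalancer           # provisions an external AWS load balancer
      selector:
        app: app1
      ports:
        - port: 80
          targetPort: 8080

With something like this, app1 calls app2 at http://app2-internal inside the cluster, and that traffic never leaves the internal network.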

Q: Can you have apps and resources (MySQL) be internal in a private subnet with no external access to them?

A: Absolutely, and it is even recommended not to expose your databases. However, we do allow it, for several reasons. We plan to make that part more secure by providing a turn-key SSH bastion that lets you access your running applications and services in a very secure and easy way. You can keep an eye on the progress of this feature here.
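For readers who manage the cluster side themselves, the same idea at the infrastructure level is to keep the worker nodes in private subnets. As a rough illustration with plain eksctl (this is not a Qovery setting, and the node group name is made up):

    # eksctl sketch (not a Qovery setting): nodes go into private subnets only,
    # so they get no public IPs and are not reachable from the internet.
    managedNodeGroups:
      - name: private-workers      # made-up name
        instanceType: t3.medium
        desiredCapacity: 3
        privateNetworking: true    # private subnets only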

Q: Can you configure apps to use Spot Instances in AWS?

A: I’ll let @Pierre_Mavro (CTO and co-founder @ Qovery) respond to this question :slightly_smiling_face:

Q: Shutdown: is there an easy way to turn off instances of test environments to save costs? Let’s say turn them on at 8 am and off at 8 pm every day, and keep them off on the weekends.

A: Absolutely. You can take a look at all the cost optimization features Qovery covers.
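Qovery’s own start/stop features are covered in the link above; just to make the idea concrete, here is a rough sketch of how such a schedule can be expressed on any Kubernetes cluster. This is not Qovery’s implementation; the namespace and service account names are made up, and the service account would need permission to scale deployments.

    # Sketch only (not Qovery's implementation): scale every deployment in a
    # test namespace down to 0 replicas at 8 pm, Monday to Friday.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: stop-test-env          # made-up name
    spec:
      schedule: "0 20 * * 1-5"     # 8 pm on weekdays
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: env-scaler   # needs RBAC to scale deployments
              restartPolicy: OnFailure
              containers:
                - name: kubectl
                  image: bitnami/kubectl
                  command: ["kubectl", "scale", "deployment", "--all",
                            "--replicas=0", "-n", "test-environment"]

A mirror CronJob scheduled at "0 8 * * 1-5" with --replicas=1 would bring the environment back up at 8 am, and leaving the weekend out of both schedules keeps it off on Saturdays and Sundays.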

Hope it helps :slightly_smiling_face:

Thanks @rophilogene for the answers.

I have a few more questions:

For microservices, how are the resources shared? By this I mean: let’s say I have a production environment with app1, app2, and app3. I assume that for high availability Qovery will create three EC2 instances in different Availability Zones (using AWS lingo here). Will app1, app2, and app3 be using those EC2 resources?

But here is a more interesting situation: what if I want app1 and app2 to run on a t3.micro instance and app3 on a c6g.large instance, but still have the three apps in the same environment? Would something like this be possible?

Very much looking forward to trying Qovery v2

Qovery uses EKS (AWS-managed Kubernetes). We made architecture choices to make it resilient to any worker node (EC2) failure. You don’t have to worry about how your apps are balanced between the nodes; they will always be up, even if something bad happens at the infrastructure level.
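To give a rough idea of the mechanism (this is generic Kubernetes, not Qovery’s exact configuration, and the names are placeholders), replicas of an app can be spread across Availability Zones so that losing a node or a zone does not take the app down:

    # Generic Kubernetes sketch (not Qovery's exact configuration): spread
    # app1's replicas evenly across Availability Zones.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app1                   # made-up name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: app1
      template:
        metadata:
          labels:
            app: app1
        spec:
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone   # one replica per zone
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchLabels:
                  app: app1
          containers:
            - name: app1
              image: nginx         # placeholder image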

Thanks for asking. I’m not sure we support clusters with different instance types yet. @Pierre_Mavro @benjaminch can you confirm, guys?

Hehe me too :smiley:

Hi!

Let me complete the answers to the questions :slight_smile:

Spot Instances: Can you configure apps to use Spot Instances in AWS?

Not yet, but it’s definitely planned for this year!

But here is a more interesting situation: what if I want app1 and app2 to run on a t3.micro instance and app3 on a c6g.large instance, but still have the three apps in the same environment? Would something like this be possible?

It’s not possible today, but it’s definitely something we can do. Not many users have requested it yet. By the way, it could be a good fit to mix with Spot Instances.

For microservices, how are the resources shared? By this I mean: let’s say I have a production environment with app1, app2, and app3. I assume that for high availability Qovery will create three EC2 instances in different Availability Zones (using AWS lingo here). Will app1, app2, and app3 be using those EC2 resources?

What Romaric says is correct. There is only one small corner case when you’re using EBS (network disks): they are bound to a single Availability Zone. Meaning that if you only have 3 nodes (1 in each zone) and a disk in zone A, and that zone is lost, the app won’t be able to recover until a node is available to run your app in that specific zone. In that specific case, we generally recommend using another kind of storage, like a database or S3, to avoid such an issue.

Hi @Pierre_Mavro,

Thank you for the answers. Spot Instances would be a much-appreciated addition to keep costs down; there are some very good posts about this: here and here.

Also, being able to have different instance types in the same environment would be a useful addition when you have microservices with different computing needs.

Do you have any use cases requiring multiple types of instances?

@rophilogene Yes. For the project we are working on, it would be more appropriate to use a compute-optimized instance type for the reporting server, but our application apps can run with 2 GB of RAM or less and are fine with general-purpose instance types. We also run some heavy cron jobs that would be best suited to an even larger compute-optimized instance type than the reporting server.

I understand that there are some workarounds for this, but it would be much better to have it all under one environment, especially between apps that consume other apps, when you want all the traffic to stay within AWS.

Hi,

I’ve seen all the progress made in the last few months, congratulations! For example, Preview Environments is a very cool feature!

But something I have not seen yet is Spot Instance support, and I don’t see it on the roadmap either. Is this something planned? The post https://www.qovery.com/blog/how-to-reduce-your-aws-bill-up-to-60 mentions start and stop schedules, but I don’t see any mention of that in the docs.

Lastly, for Preview Environments, I see it is set up per app, so does that mean every pull request will create a preview environment? Is there a way to tell it which pull requests I want a preview environment for? There will be plenty of PRs that will not need a preview, and it would be a waste of resources to always spin one up.

Hi @moisesrodriguez ,

Thank you for the heads up :slight_smile:

Here is the documentation.

Indeed, today a Preview Environment is created for every Pull Request. We plan to improve that part and make it configurable.

I’m putting @a_carrano @Florian_Lepont @Albane_Tonnellier, our product managers, in cc.


Hi @rophilogene,

I appreciate the quick response!


You’re welcome @moisesrodriguez

Hi, I can’t seem to find the documentation about linking apps in the same environment. Is there a particular DNS name that should resolve to the internal IP address?

Hi @prki, can you please open another thread for your issue? Thank you

Hey there! Any news regarding spot instances?

Hi,

We actually do not have many requests for it. Do you have any specific use case in mind, or is it just for cost reasons?

Thanks

We were just looking for a way to host resource-heavy workloads for cheap and Qovery + spot instances would have been quite convenient here. :slight_smile:

Got it. An alternative would be to use usage.ai. It automatically reserves and releases EC2 instances based on your usage to optimize the cost. You can save up to 50% of your EC2 bill.

Breaking news: we’ve recently made a partnership with them, more to come :wink:


We would also be interested in using spot instances for clusters in dev/non-prod environments. Is this still on the roadmap?

We can do this in our self-hosted EKS clusters using eksctl. For example:

    managedNodeGroups:
      - name: my-new-asg
        instanceTypes: ["c5.2xlarge", "c5d.2xlarge", "c5n.2xlarge", …]
        spot: true

It allows us to run much larger instances while minimizing cost for non-critical workloads.

Hi @kflavin ,

It is not on our roadmap, but you can definitely add the idea here.

It makes complete sense to me for non-prod clusters, but there is some work to be done on the product first before we can properly manage Spot Instances: we first need to support defining multiple nodeGroups and deployment targeting (this was already on our roadmap, but there is no ETA yet).
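To make that concrete for readers running their own clusters, here is a hedged sketch of what “multiple nodeGroups plus deployment targeting” can look like with plain eksctl and Kubernetes. The names are made up, and this is not necessarily how Qovery will implement it: one on-demand group, one Spot group carrying a label,

    # eksctl sketch: one on-demand node group and one labeled Spot node group.
    managedNodeGroups:
      - name: on-demand-default    # made-up name
        instanceType: t3.medium
        desiredCapacity: 3
      - name: spot-workers         # made-up name
        instanceTypes: ["c5.2xlarge", "c5d.2xlarge"]
        spot: true
        labels:
          lifecycle: spot

and, on the Kubernetes side, a workload steered to the Spot nodes via a nodeSelector:

    # Kubernetes sketch: target the Spot node group through its label.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: batch-worker           # made-up name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: batch-worker
      template:
        metadata:
          labels:
            app: batch-worker
        spec:
          nodeSelector:
            lifecycle: spot        # schedule only on the Spot node group
          containers:
            - name: worker
              image: busybox       # placeholder image
              command: ["sleep", "3600"]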

To reduce your cloud cost, we have a partnership with Usage.ai. You can find all the details here.

Thanks @a_carrano. I submitted the idea to your roadmap.