Questions concerning databases, deployment strategy and containers

As requested, I’ve copy-pasted the conversation:

Hi there, a series of questions if you please.

  1. I see from the docs that you can extend the startup time to account for DB migrations, but if you are running more than one pod of the application, they will conflict;

  2. Is it possible to provide an init_container, or something similar? The danger is that when you have a large database with several TB of data, it is sometimes difficult to gauge how long a migration will take.

  3. In some cases devs need to exec into the pods to access the database or run custom commands; is this possible?

  4. Additionally, are DBs running in-cluster? And is it possible to use persistence layers outside of the cluster instead? I imagine yes; you would just need to manage the Terraform/CDK yourself and provide access details.

  5. Are env vars encrypted server-side by default (and if so, how)?

  6. Also, Pods and the PodSpec allow multiple apps to run within a logical pod (shared network), and in a lot of my clients’ cases this is necessary… to what degree is this supported?

  7. Additionally regarding deployment, what kind of CD options are available/planned (Blue/Green, Canary, A/B/C Experiments?)

  8. Any support for mTLS and custom certificates?

thanks!

PS

Good job on the platform, btw… it feels really good… the CLI is a little rough, but that’s probably just me needing to grok its features…

  1. I keep mixing up env with env-vars.
  2. Also, why is there no project context? I always have to provide -p and -a, which sucks if you are used to kubectl.

Hi @rosscdh, welcome to the Qovery community. I will respond to all your questions with Qovery v2 in mind, as we are deprecating the current version. You can register for the v2 here.

Indeed, it can be risky if you have more than one instance of your app running and performing a DB migration at the same time. (cc @Pierre_Mavro @bguyot ) I cc’d my team to validate that we cover this edge case. To prevent any problem when multiple app instances are running, I would recommend performing the DB migration manually.

In your app container you can put whatever you want (e.g. init scripts). However, I understand the idea of having a specific way to manage the init script. Can you give more details about what you have in mind here? cc @bguyot

Not yet, but we have some users requesting this feature (you can vote here). Most of them are Rails developers. Are you?

Databases are running in two different modes: development and production.

development mode: a Docker container instance with fast local storage attached to persist the data. It is compatible with the production mode and 50x cheaper. Recommended for development purposes.

production mode: a managed service provided by the cloud service provider. E.g. if you require PostgreSQL while running Qovery on AWS, then Qovery will provision an RDS instance compatible with PostgreSQL.

What is your use case of “using persistence layers outside of your cluster”?

You can manage variables for your app in 2 ways:

  • Environment variables: They are not encrypted.
  • Secrets: They are encrypted and salted server-side.

Can you give more details of what you need to achieve? @Pierre_Mavro will respond to this question.

Today we support Rolling Update. We plan to support more as soon as our v2 is GA, prioritizing the deployment strategies that make sense for our users.

Not yet; as above, we plan features that make sense for our users. Do you have any requirements here?

Yes, the wording sucks :frowning: We are working on a new version of the CLI. I would love to have your feedback on it.


Out of curiosity @rosscdh, why are you interested in using Qovery as you seem to be quite experienced with Kubernetes and infrastructure-related stuff? :slightly_smiling_face:

Hi,

Interesting questions there :slight_smile:

I see from the docs that you can extend the startup time to account for DB migrations, but if you are running more than one pod of the application, they will conflict;

Several solutions exist:

  1. As Romaric suggested, you can run it from a node (with an API endpoint, for example), so only one node will perform the migration. This is definitely the simplest solution.
  2. If you don’t have API endpoints or can’t do it this way, you can use a lock mechanism (almost all databases support one by default). Here is a classic scenario:
    2.1 When new pods are starting (older ones remain alive until the new ones have fully bootstrapped), the first thing they do is check whether a migration is required; if not, they boot normally.
    2.2 The very first pod to start will see that a migration is required and will take the lock. Other pods will wait until the lock is released. The pod holding the lock will perform the migration.
    2.3 Once the lock is released, the other pods will try to acquire it; one by one they will get it, check once again whether a migration is required, and release it immediately since the migration has already been done.
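The lock-check-migrate flow above can be sketched in a few lines of Python. This is a minimal illustration, not Qovery code: a `threading.Lock` stands in for a database-level lock (in a real deployment you would use something like PostgreSQL advisory locks), and all names here are made up for the example.

```python
import threading

migration_lock = threading.Lock()   # stand-in for a DB-level lock
state = {"schema_version": 1}       # stand-in for the database schema
TARGET_VERSION = 2
migrations_run = []                 # records which "pod" actually migrated

def migration_required():
    return state["schema_version"] < TARGET_VERSION

def run_migration(pod_name):
    state["schema_version"] = TARGET_VERSION
    migrations_run.append(pod_name)

def boot_pod(pod_name):
    # Step 2.1: on startup, check whether a migration is required at all.
    if not migration_required():
        return
    # Steps 2.2/2.3: take the lock, re-check, migrate only if still needed.
    with migration_lock:
        if migration_required():
            run_migration(pod_name)
        # else: another pod migrated while we waited; release immediately.

threads = [threading.Thread(target=boot_pod, args=(f"pod-{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(migrations_run))  # → 1: exactly one pod performed the migration
```

The second check inside the lock is the important part: every pod that waited re-verifies whether the migration is still needed before doing anything, so the work happens exactly once.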

Is it possible to provide an init_container, or something similar? The danger is that when you have a large database with several TB of data, it is sometimes difficult to gauge how long a migration will take.

I agree, this is why solution 1 is useful in this kind of scenario.
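For context, the init_container idea the question refers to looks like this in a raw Kubernetes PodSpec. This is a generic Kubernetes sketch, not a Qovery feature; the image name and migration command are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-migration
spec:
  initContainers:
    # Runs to completion before the main container starts.
    # Name, image, and command are hypothetical examples.
    - name: db-migrate
      image: myapp:latest
      command: ["./manage.py", "migrate"]
  containers:
    - name: app
      image: myapp:latest
```

Note that init containers share the concern raised above: if the migration takes a long time, the pod sits in the Init phase for that entire duration, which is why the single-node (solution 1) approach is attractive for multi-TB databases.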

Also, Pods and the PodSpec allow multiple apps to run within a logical pod (shared network), and in a lot of my clients’ cases this is necessary… to what degree is this supported?

Sorry, but I’m not sure I get your question. Do you mean multiple containers in the same pod? If you could give an example, I’m sure it would help me better understand your needs.

Thanks

1 Like

Trying to reply but new users can only put 2 links in a post…

kc delete po -n demo commerce-695795997f-42jzl

Happy to give my opinion; I am one of those people who spend all day on the CLI, so workflow is important :slight_smile:

Out of curiosity @rosscdh, why are you interested in using Qovery as you seem to be quite experienced with Kubernetes and infrastructure-related stuff? :slightly_smiling_face:

Good question. While I sort of know my way around a cluster, many of my clients and their dev teams do not, and don’t want to. So your offering is a nice balance between “just works” and a decent dev workflow. :slight_smile:

@Pierre_Mavro I have just created an organization (startup) and would like to request access to your v2 offering.

Sure thing, can you contact me on Discord, please? We give access to users as fast as we can - we are doing a gradual release to fix the first incoming bugs, then onboarding more and more users.

Kube 1.16, guys? I hope v2 is a little more up to date?

Hi @rosscdh ,

There is no difference between v1 and v2 in this respect. We are aware of it and are currently moving to 1.17. Today and tomorrow, every cluster on 1.16 will be updated to 1.17. We’re going to continue updating up to 1.20 in the coming weeks.

1 Like