Deployment workflow for sandbox & staging environment

Hey there :wave:

Not really an issue, but rather a question. We have three environments that are (or will be) configured and created through Terraform. We are only using Qovery with containers, so there is no relation whatsoever to our Git provider.

Our workflow is as follows:

  • When a pull request is merged into develop, a :sandbox (or :develop) container is built and pushed.
  • When a pull request is merged into master, a :staging (or :latest) container is built and pushed.
  • When a tag is created (e.g. a new release), a :tag container is built and pushed.
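To illustrate, the branch-to-tag mapping above could be expressed in a CI step roughly like this (a sketch with a hypothetical helper; image names are placeholders):

```shell
#!/bin/sh
# Map a Git ref to the image tag used in the workflow above.
ref_to_tag() {
  case "$1" in
    refs/heads/develop) echo "sandbox" ;;          # merge into develop
    refs/heads/master)  echo "staging" ;;          # merge into master
    refs/tags/*)        echo "${1#refs/tags/}" ;;  # release tag, e.g. v1.2.3
    *)                  echo "unknown" ;;
  esac
}

# In CI, the result would feed a build/push step, for example:
#   TAG=$(ref_to_tag "$GITHUB_REF")
#   docker build -t myregistry/app:"$TAG" . && docker push myregistry/app:"$TAG"
```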

It seems this workflow won’t work in Qovery, because when I try to redeploy a container using the same image, it looks like nothing happens. Is this normal behavior? If so, could you explain why, and how should we handle it?

Best,
Arthur

Hi @arthurecg ,

Are you trying to redeploy an updated container with the same tag? E.g. latest or something else?

Yes. We would have loved to use the Qovery CLI to deploy some tags, but given that we use Terraform to set up our infrastructure, it would mess with the state, as the version in Qovery and the version in Terraform would differ. Moreover, it would be pretty painful to update our Terraform configuration for each release in sandbox.

Indeed, it’s better to use Terraform and the CLI independently. We also have room for improvement here. We have some ideas, but it’s another thread.

Concerning using a static tag for your images: it’s not recommended, since multiple layers of caching are involved in Qovery and Kubernetes, and images are cached per tag. So an updated image pushed under the same latest tag may not be pulled as you expect.

Here is a quick diagram to illustrate what I mean

I get it. So you would suggest updating the Terraform state even for “dev releases”?

In my previous experience, we were using Keel to do this kind of thing. It was obviously not available in production, for safety reasons. Each time we redeployed a :develop or a :latest image, it would automatically pull the image and restart the pods.

Would that be possible within Qovery? I guess with this solution I would have to deploy my app (so at least a Deployment and a Service, given that in our configuration we have a gateway sitting in front, so no Ingress is needed) through a Lifecycle Job.

I can see with our product team what we can propose here. A simple solution would be to provide an option to invalidate container image caching and then always pull the image. It will heavily hurt deployment performance, but maybe it’s ok for development.

I’m not sure I understand this question.

In the current scenario with our Terraform configuration, would the ideal solution be to apply a new Terraform configuration every time we have something ready for sandbox or staging?

Thanks for the clarification :pray:

Since you are deploying containers and not apps via Git, using Terraform to deploy your new app releases makes sense; then everything is homogeneous. So you can have a GitOps flow with Qovery and use Terraform to manage both your infrastructure and your app release process.

However, keep in mind that everything you modify from the UI or CLI (or API) will be overwritten by Terraform on the next run if your manifest is not updated.
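To make that concrete, a minimal sketch of driving the release tag through Terraform (hypothetical resource and attribute names; the actual Qovery provider schema may differ):

```hcl
# Sketch only: attribute names are illustrative, not the provider's exact schema.
variable "app_tag" {
  description = "Image tag to deploy (e.g. sandbox, staging, or a release tag)"
  type        = string
}

resource "qovery_container" "app" {
  # ...environment and registry wiring elided...
  image_name = "myregistry/app"
  tag        = var.app_tag
}
```

Each release would then be a new apply, e.g. `terraform apply -var="app_tag=v1.2.3"`, keeping the state in sync with what is actually deployed.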


I see, thanks!

Let me know if you need more input on our use case so you guys can decide if it’s something you want to do or not. :pray:


If you can make a diagram of your deployment workflow that would be perfect to make sure we are aligned :+1:

Here you are :slight_smile:


It’s perfect @arthurecg - thanks a lot :pray:

Hello!

We would love to have the option you imagine @rophilogene :

“A simple solution would be to provide an option to invalidate container image caching and then always pull the image. It will heavily hurt deployment performance, but maybe it’s ok for development.”

It would allow us to keep pushing commits to a dev branch, build and push a new Docker image with the same tag (the branch name) to ECR (from GitHub CI), and have Qovery automatically redeploy the new image.

What do you think @a_carrano?

We could add an advanced setting to manage the imagePullPolicy for the containers, but it comes with some risks/problems:

  • In case of a deployment failure, we won’t be able to roll back automatically to a stable version (the same tag now points at the broken image)
  • It might increase deployment time, since we always pull the image
  • It might cause longer downtime: if a pod crashes, Kubernetes will need to re-pull the image instead of using a local copy
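For reference, the setting under discussion corresponds to the standard Kubernetes imagePullPolicy field. A plain manifest sketch with placeholder names (not Qovery’s actual generated output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: myregistry/app:develop  # static tag, re-pushed on each merge
          imagePullPolicy: Always        # force a registry pull on every pod start
```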

Since it’s for dev purposes, most of these points may not be an issue, but it’s better to state them clearly.

Can you add this point to our public roadmap :pray:? I’ll discuss it internally as well. Thanks!
