Running backup on demand for Preview Environments

We have a workflow that requires us to store a backup of the database on demand. It looks like I can use replibyte to perform the backup easily, and I was thinking I’d set up a job and trigger it through the Qovery API. The problem is that jobs apparently must be either cron jobs that run on a schedule or lifecycle jobs that are triggered by at least one event (start, stop, or delete). My job should only run when I explicitly force it to, and in fact running it on a schedule or on any of those events could be a problem.

Is there a way for me to have a job like this that would run only when I specifically ask it to? I guess I could do something like use the API to:

  • Create a job that runs when the environment is deleted.
  • Use the deploy job API call to force it to run.
  • Then immediately delete the job so it won’t actually be run when the environment is deleted.
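The create / deploy / delete dance above could be sketched roughly like this. To be clear, this is a sketch and not a working client: the endpoint paths and payloads here are assumptions and would need to be checked against the actual Qovery API reference, so the HTTP caller is injected rather than hard-coded.

```python
from typing import Callable

def run_one_shot_backup_job(call: Callable[[str, str], dict],
                            environment_id: str) -> dict:
    """Create a lifecycle job, force-deploy it, then delete it.

    `call(method, path)` performs the HTTP request; the paths below are
    hypothetical -- verify them against the real Qovery API reference.
    """
    # 1. Create a lifecycle job bound to the environment's "delete" event.
    job = call("POST", f"/environment/{environment_id}/job")
    job_id = job["id"]
    # 2. Force the job to run immediately via the deploy endpoint.
    call("POST", f"/job/{job_id}/deploy")
    # 3. Delete the job so the environment's delete event never triggers it.
    call("DELETE", f"/job/{job_id}")
    return job
```

One caveat with this approach: if the deploy is asynchronous, you would need to wait for the job execution to finish before deleting the job, or the deletion might cancel the run.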

Would that work? Am I missing a better approach to this?

I guess another approach would be for me to create a Dockerfile that includes replibyte plus a simple web service that triggers the backup when called. Then my application can call that service whenever a backup is needed.

Hi @aubrey ,

Before responding to all your other questions: did you look at this doc page > Seed Database | Qovery

I’m tagging @a_carrano since it’s something we discussed this week - we plan to add a way to skip execution of a declared service (a lifecycle job or any other resource).

@rophilogene yep, and we have a seed process working with our preview environments now. Here’s the problem I’m trying to solve now:

We do live sales demos, and right now we have a single sales instance of our app. Because the sales team changes things during demos, every night the database gets reset to a backup we’ve made that contains the data we want for the demo, wiping out those changes. Occasionally, though, the sales team wants to change the data intentionally, for example to add new use cases. In that case they make the modifications they want in the app and then click a button to make the current data the new baseline we reset to each night.

That last part is the problem I’m trying to solve here: I need to make a new backup of the database when the user (who is not in Qovery) instructs me to, and then use that backup to seed the database when new sales environments are created. I think I know how to do every part of this except kicking off the creation of the new backup. Right now I’m working on a small web service that calls out to replibyte to do this, and that seems to be going fine, but originally I thought I’d solve it by manually executing a lifecycle job through the API.

Hi @aubrey ,

Super clear :ok_hand:

Ok :+1:

If you want to use a lifecycle job to kick off the backup of your primary database - it’s possible but you’ll have to choose the appropriate option depending if your database is accessible from Qovery or not.

Database not accessible from Qovery

In this case, creating an intermediary service that exposes an endpoint and executes Replibyte can be an option.

Pro tip: you can do VPC peering with Qovery - so if your primary database is on AWS, you could connect your Qovery EKS cluster to your database’s VPC and then directly use a lifecycle job with Replibyte to run the backup.

Database accessible from Qovery

You can directly use a Lifecycle job with Replibyte and run the backup.

Does that make sense? Let me know if you want me to elaborate more.

@rophilogene thanks for the update. I think I’ll be able to get this under control by writing a service that executes the backups, but I’m still back to my original question on how it could work with Lifecycle jobs. I don’t want to create the backup when the environment starts, stops, or is deleted. I only want to do it when the user requests it. I think I can figure this out, but it doesn’t look like the available job types are a good fit for that use case unless I’m missing something.

@aubrey I had the same dilemma as you. What I did is run the backup in my sandbox environment as a cron job n times a day and save it to AWS S3. Then, when I spin up my preview environment, I have a lifecycle job that reads the latest backup from S3 and uses it to seed my preview environment’s database.
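For anyone scripting that pattern, the two halves are essentially a pair of Replibyte invocations: `dump create` in the cron job and `dump restore remote -v latest` in the lifecycle job (both are documented Replibyte subcommands, though the config file name here is an assumption). A small wrapper might look like this, with the runner injected so the sequence can be exercised without Replibyte installed:

```python
import subprocess
from typing import Callable, Sequence

# A runner takes a command line and returns its exit code.
Runner = Callable[[Sequence[str]], int]

def _default_run(cmd: Sequence[str]) -> int:
    return subprocess.run(cmd).returncode

def nightly_backup(run: Runner = _default_run) -> int:
    """Cron-job half: push a fresh dump to the datastore in replibyte.yaml."""
    return run(["replibyte", "-c", "replibyte.yaml", "dump", "create"])

def seed_preview(run: Runner = _default_run) -> int:
    """Lifecycle-job half: restore the most recent dump into the preview DB."""
    return run(["replibyte", "-c", "replibyte.yaml",
                "dump", "restore", "remote", "-v", "latest"])
```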

1 Like