In my jobs stage, I deploy a container (magento-bw) that builds assets for my app and uploads them to AWS S3.
I use my entrypoint script to clone my repository and check out my release, then I build my assets and upload them to S3 with gulp.
I don’t understand why my last stage begins deploying before this job is over.
The container (magento-bw) shows the “deployed” status before its entrypoint has finished executing, and then
my app stage starts deploying my app (magento-app) before my job container’s entrypoint has completed.
I can confirm your understanding is correct: we wait for every element of a stage to be done before moving on to the next stage.
Something is weird, though: from what I understand, your job is somehow flagged as OK before it’s actually done. Do you mind sharing your Dockerfile for this job? It feels like there is a fire-and-forget/background task running, and Docker doesn’t wait for it to finish before shutting down.
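To illustrate the fire-and-forget pattern being described, here is a sketch with a hypothetical `build-assets.sh` script (the script name is invented for the example):

```shell
# Backgrounding the work lets the entrypoint exit before the build is done,
# so the container is reported as ready/succeeded immediately:
./build-assets.sh &            # fire and forget

# Running it in the foreground blocks until it completes:
./build-assets.sh              # the job stays "in progress" until this returns

# Or, if it must run in the background for some reason, reap it explicitly:
./build-assets.sh & wait $!    # `wait` blocks until the background job exits
```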
Is it normal that this service has some critical errors?
** (mydumper:7579): CRITICAL **: 13:02:16.904: Error connecting to database: Can't connect to MySQL server on 'api-staging-phpapp.[...].eu-west-1.rds.amazonaws.com' (110)
Zend_Json_Exception: Decoding failed: Syntax error in /var/www/current/lib/Zend/Json.php:97
What’s weird is that I don’t have any logs from your container job for the last deployment. Are we sure the code was executed?
Is there a way to split this Dockerfile job and have a dedicated job for the container build?
I’m quite sure there were logs during the last deployment.
I’ve just redeployed this container twice.
I had no logs except these (for the last deployment):
🏁 Deployment request 5ee4b372-f989-47ef-9541-c6a2f13fffdf-55-1686156876 for stage 1 `JOB DEFAULT` has been sent to the engine
⏳ Your deployment is 1 in the Queue
🚀 Qovery Engine starts to execute the deployment
🧑🏭 Provisioning 1 docker builder with 4000m CPU and 8gib RAM for parallel build. This can take some time
🗂️ Provisioning container repository z0a7ee6ce
📥 Cloning repository: https://github.com/pandacraft/magento.git to /home/qovery/.qovery-workspace/5ee4b372-f989-47ef-9541-c6a2f13fffdf-55-1686156876/build/z0a7ee6ce
🕵️ Checking if image already exist remotely 463661221592.dkr.ecr.eu-west-1.amazonaws.com/z0a7ee6ce:16198314709412108714-a99e06f12d97f928092ee54806f496ee7f0ef86e
🎯 Skipping build. Image already exist in the registry 463661221592.dkr.ecr.eu-west-1.amazonaws.com/z0a7ee6ce:16198314709412108714-a99e06f12d97f928092ee54806f496ee7f0ef86e
✅ Container image 463661221592.dkr.ecr.eu-west-1.amazonaws.com/z0a7ee6ce:7446692582338165391-a99e06f12d97f928092ee54806f496ee7f0ef86e is built and ready to use
Proceeding with up to 4 parallel deployment(s)
🚀 Deployment of Application `z0a7ee6ce` at tag/commit a99e06f12d97f928092ee54806f496ee7f0ef86e is starting: You have 1 pod(s) running, 0 service(s) running, 0 network volume(s)
┃ Application at commit a99e06f12d97f928092ee54806f496ee7f0ef86e deployment is in progress ⏳, below the current status:
┃ 🛰 Application has 1 pods. 0 starting, 0 terminating and 0 in error
┃ ⛑ Need Help ? Please consult our FAQ to troubleshoot your deployment https://hub.qovery.com/docs/using-qovery/troubleshoot/ and visit the forum https://discuss.qovery.com/
✅ Deployment of Application succeeded
❤️ Deployment succeeded ❤️
Qovery Engine has terminated the deployment
Actually, it makes sense; sorry I didn’t see it from the beginning, but your magento_bw is a container (a long-running app), not a job, so it never stops.
Basically, you deploy it once, and in your case the container keeps running with no updates, so nothing is triggered.
I guess what you want in your case is a lifecycle job triggered on deploy =>
I’ve seen this feature, but I’m going to have more applications in this environment, and from what I understand, the job will run on any application deployment. However, it should only run when magento-app is deployed.
I don’t think it’s possible for the time being. But IMO, in your container-building job, if the container already exists, docker build / push will have no effect, so even if it triggers on deployments where magento-app didn’t change, it should be fine. Am I missing anything? Do you have any tag set for this container?
What I was thinking is that your container (the one you build in this job) could have a custom tag which is the commit id, or anything tied to the latest version of magento-app, so your script can detect whether a build / push is needed; if the container with that tag already exists, nothing will happen.
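A sketch of that guard, assuming the ECR registry from the logs above; the repository name `magento-assets` is a placeholder, and the tag is derived from the current commit:

```shell
# Hypothetical build guard; the repository name "magento-assets" is invented.
REPO="463661221592.dkr.ecr.eu-west-1.amazonaws.com/magento-assets"
TAG="$(git rev-parse --short HEAD)"   # tie the image tag to the commit

if docker manifest inspect "${REPO}:${TAG}" >/dev/null 2>&1; then
    # inspect succeeds only when the tag already exists in the registry
    echo "Image ${REPO}:${TAG} already exists, skipping build"
else
    docker build -t "${REPO}:${TAG}" .
    docker push "${REPO}:${TAG}"
fi
```

`docker manifest inspect` returns non-zero when the tag is missing remotely, so the build/push branch only runs for a new commit.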
As you suggested, I created a separate lifecycle job named “magento-assets-builder” which should run on the start event (so when a container is deployed), but despite my attempts, it never ran automatically…