Trouble using the pipeline feature

Hello,

I’m trying to use the pipeline feature, but I noticed a strange behaviour.

I created a pipeline with 3 stages (databases, jobs, and apps).

In my jobs stage, I deploy a container (magento-bw) that builds assets for my app and uploads them to AWS S3.
I use my entrypoint script to clone my repository and get my release, then I build my assets and upload them to S3 with gulp.

I don’t understand why my last stage begins to deploy before this job is over.
The container (magento-bw) status is “deployed” before the entrypoint has executed, and then
my app stage begins to deploy my app (magento-app) before my job container’s entrypoint has finished.

Am I missing something?

Hello @abouillis,

I can confirm your understanding is correct: we wait for every element of a stage to be done before moving on to the next stage.
What’s weird is that, from what I understand, your job is somehow flagged as OK before it’s actually done. Do you mind sharing your Dockerfile for this job? It feels like there is a fire-and-forget or background task running, and Docker doesn’t wait for it to finish before shutting down.
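For example, an entrypoint like this (names here are purely hypothetical) would behave that way: the heavy task is backgrounded, so the main process is up and the container looks healthy while the work is still running:

#!/bin/sh
# hypothetical fire-and-forget entrypoint
/usr/local/bin/build_and_upload.sh &   # backgrounded: nothing waits for it to finish
exec /usr/local/bin/main-process       # container is "running" as soon as this starts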

Thanks

That container is also used to do some backups.
The main process is cron, so I don’t expose any port.
Maybe this is a clue…

Here is my Dockerfile:

FROM ubuntu:18.04

ENV DEBIAN_FRONTEND=noninteractive

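# timezone setup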
RUN apt-get update \
    && apt-get install -y gnupg tzdata \
    && echo "UTC" > /etc/timezone \
    && dpkg-reconfigure -f noninteractive tzdata

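# base packages, PHP 7.1 (ondrej/php PPA) and Composer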
RUN apt-get update \
    && apt-get install -y openssl wget vim curl zip unzip git software-properties-common \
        supervisor sqlite3 xmlstarlet jq geoipupdate libexpat1-dev zlib1g-dev cron libatomic1 \
    && add-apt-repository -y ppa:ondrej/php \
    && apt-get update \
    && apt-get install -y php7.1-cgi php7.1-cli php7.1-dev \
        php7.1-mysql php7.1-pgsql php7.1-sqlite3 \
        php7.1-soap php7.1-json php7.1-curl \
    && apt-get install -y php7.1-gd php7.1-gmp php7.1-imap php7.1-mcrypt \
        php7.1-mbstring php7.1-zip php7.1-xml php7.1-memcached \
        php7.1-bcmath php7.1-intl php7.1-readline php7.1-xdebug \
        php-xdebug php-pear php-apcu php-memcached php-redis php-msgpack php-igbinary \
    && update-alternatives --set php /usr/bin/php7.1 \
    && php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
    && mkdir /run/php \
    && apt-get remove -y --purge software-properties-common \
    && apt-get -y autoremove \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && touch /var/log/remote.log

# install node
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get update
RUN apt-get install -y nodejs

# install node tools
RUN npm install -g grunt-cli@1.3.2

# install PostgreSQL 12
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | tee  /etc/apt/sources.list.d/pgdg.list
RUN apt-get update
RUN apt-get -y install postgresql-12 postgresql-client-12
RUN apt-get autoremove -y

# install mydumper
RUN wget https://github.com/mydumper/mydumper/releases/download/v0.10.5/mydumper_0.10.5-1.bionic_amd64.deb
RUN dpkg -i mydumper_0.10.5-1.bionic_amd64.deb

# install php config files
COPY ./qovery/common/php/php.ini /etc/php/7.1/fpm/php.ini

# install release manager script
COPY ./qovery/common/scripts/release_manager.sh /var/www/

# crontab setup
COPY ./qovery/background-worker/crontab /etc/cron.d/magento
RUN chmod 0644 /etc/cron.d/magento
RUN crontab /etc/cron.d/magento

# install entrypoint script
COPY ./qovery/background-worker/entrypoint.sh /usr/bin/entrypoint
RUN chmod +x /usr/bin/entrypoint

WORKDIR /var/www

ENTRYPOINT ["entrypoint"]

Do you think there is a trick to make Docker wait?

May I ask what you have in the entrypoint? Anything like cron && tail -f /var/log/cron.log ?

Yes:

/usr/sbin/cron -f

But the container is marked as deployed long before this command.

The entrypoint script starts with some config file setup (nothing special) and then handles the asset build and upload to S3 (gulp build). This process is quite long (several minutes).
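Roughly, the flow is the following (the sketch below is illustrative, not the exact script; the release manager step and the gulp task names are assumptions):

#!/bin/bash
set -e

# config files setup (nothing special)
/var/www/release_manager.sh   # assumption: clones the repo and gets the release

# assets build and upload to S3 -- the long part (several minutes)
cd /var/www/current
gulp build                    # task names are assumptions
gulp deploy

# only then does cron become the main process
exec /usr/sbin/cron -f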

Is it normal that this service has some critical errors?

** (mydumper:7579): CRITICAL **: 13:02:16.904: Error connecting to database: Can't connect to MySQL server on 'api-staging-phpapp.[...].eu-west-1.rds.amazonaws.com' (110)
Zend_Json_Exception: Decoding failed: Syntax error in /var/www/current/lib/Zend/Json.php:97

No, it’s one of the cron jobs (backups) that ran at 3pm (not during deployment); I’m working on it.

What’s weird is that I don’t have any logs from your container job for the last deployment. Are we sure the code is executed?
Is there a way to split this Dockerfile job and have a dedicated job for the container build?

I’m quite sure there were logs during the last deployment.
I’ve just redeployed this container twice.
I had no logs except these (for the last deployment):

🏁 Deployment request 5ee4b372-f989-47ef-9541-c6a2f13fffdf-55-1686156876 for stage 1 `JOB DEFAULT` has been sent to the engine
⏳ Your deployment is 1 in the Queue
🚀 Qovery Engine starts to execute the deployment
🧑‍🏭 Provisioning 1 docker builder with 4000m CPU and 8gib RAM for parallel build. This can take some time
🗂️ Provisioning container repository z0a7ee6ce
📥 Cloning repository: https://github.com/pandacraft/magento.git to /home/qovery/.qovery-workspace/5ee4b372-f989-47ef-9541-c6a2f13fffdf-55-1686156876/build/z0a7ee6ce
🕵️ Checking if image already exist remotely 463661221592.dkr.ecr.eu-west-1.amazonaws.com/z0a7ee6ce:16198314709412108714-a99e06f12d97f928092ee54806f496ee7f0ef86e
🎯 Skipping build. Image already exist in the registry 463661221592.dkr.ecr.eu-west-1.amazonaws.com/z0a7ee6ce:16198314709412108714-a99e06f12d97f928092ee54806f496ee7f0ef86e
✅ Container image 463661221592.dkr.ecr.eu-west-1.amazonaws.com/z0a7ee6ce:7446692582338165391-a99e06f12d97f928092ee54806f496ee7f0ef86e is built and ready to use
Proceeding with up to 4 parallel deployment(s)
🚀 Deployment of Application `z0a7ee6ce` at tag/commit a99e06f12d97f928092ee54806f496ee7f0ef86e is starting: You have 1 pod(s) running, 0 service(s) running, 0 network volume(s)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
┃ Application at commit a99e06f12d97f928092ee54806f496ee7f0ef86e deployment is in progress ⏳, below the current status:
┃
┃ 🛰 Application has 1 pods. 0 starting, 0 terminating and 0 in error
┃
┃
┃ ⛑ Need Help ? Please consult our FAQ to troubleshoot your deployment https://hub.qovery.com/docs/using-qovery/troubleshoot/ and visit the forum https://discuss.qovery.com/
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Deployment of Application succeeded
❤️ Deployment succeeded ❤️
Qovery Engine has terminated the deployment

If you can, feel free to try it yourself.

It seems the job is not executed: when launching a deploy, I can see the deployment starting, but it claims the pod is already running.

Hence, in the pod logs, there is no trace of your container build.

If I restart the service, I do see logs from what I think is your container build.

Actually, it makes sense; sorry I didn’t see that from the beginning, but your magento_bw is a container (a long-running app), not a job, so it never stops.
Basically, you deploy it once, but in your case the container is still running with no updates, so nothing is triggered.

I guess what you want in your case is a lifecycle job triggered on deploy.

You can create such a job and set it to be triggered when the environment starts (deploy or redeploy); more info on lifecycle jobs here.

Let me know if it’s clear and if I can help you with the setup :slight_smile:

Cheers

I’ve seen this feature, but I’m going to have more applications in this environment, and from what I understand, the job will run on any application deployment. However, it should only run when magento-app is deployed.

I don’t think that’s possible for the time being. But IMO, in your container building job, if the container already exists, docker build / push will have no effect, so even if it’s triggered on deployments where magento-app didn’t change, it should be fine. Am I missing anything? Do you have any tag set for this container?

Ok, that makes sense, so I will try this solution.
This is a build-from-repository mode, so I don’t think I can set a tag on my container image.

What I was thinking is: your container (the one you build in this job) could have a custom tag, like the commit ID or anything related to the latest version of magento-app, so your script can detect whether a build / push is needed; if a container with that tag already exists, nothing will happen.
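
A rough sketch of that idea (the image name and tag scheme here are hypothetical; the registry is taken from your deploy logs):

#!/bin/bash
set -e
REGISTRY="463661221592.dkr.ecr.eu-west-1.amazonaws.com"
IMAGE="$REGISTRY/magento-assets"                       # hypothetical image name
TAG=$(git -C /var/www/current rev-parse --short HEAD)  # latest magento-app commit

# if this tag already exists in the registry, there is nothing to do
if docker manifest inspect "$IMAGE:$TAG" >/dev/null 2>&1; then
  echo "$IMAGE:$TAG already exists, skipping build/push"
  exit 0
fi

docker build -t "$IMAGE:$TAG" .
docker push "$IMAGE:$TAG"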

How can I set this custom tag?

So just to understand, you have a script that basically:

  1. clone the repo (magento-app?)
  2. build assets
  3. upload those assets to S3

What you want is to prevent those 3 steps from happening if magento-app didn’t change?

If so, maybe you can create a manifest file on your S3 containing the magento-app commit ID; your new workflow would then be (see the sketch after the list):

  1. clone the repo (magento-app?) and get the commit ID
  2. read the manifest file from S3 and get its commit ID
  3. if the commit IDs are equal, stop here; otherwise move on to the next step
  4. build assets
  5. upload those assets to S3
  6. update the manifest file with the new commit ID and upload it to S3
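
A rough sketch of that workflow in shell (the bucket name, output path, and gulp task are assumptions):

#!/bin/bash
set -e
BUCKET="s3://my-assets-bucket"   # hypothetical bucket

git clone https://github.com/pandacraft/magento.git app
CURRENT=$(git -C app rev-parse HEAD)

# read the manifest; a missing file simply means this is the first run
DEPLOYED=$(aws s3 cp "$BUCKET/manifest.txt" - 2>/dev/null || true)

if [ "$CURRENT" = "$DEPLOYED" ]; then
  echo "magento-app unchanged ($CURRENT), skipping build"
  exit 0
fi

(cd app && gulp build)                  # build assets
aws s3 sync app/dist "$BUCKET/assets"   # upload them
echo "$CURRENT" | aws s3 cp - "$BUCKET/manifest.txt"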

Maybe you can do something cleaner by leveraging S3 object tagging, but the idea is the same.
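
For instance (the bucket, key, and tag key below are assumptions):

# read the commit ID stored as an object tag
aws s3api get-object-tagging --bucket my-assets-bucket --key assets/manifest
# write it back after a successful upload
aws s3api put-object-tagging --bucket my-assets-bucket --key assets/manifest \
  --tagging "TagSet=[{Key=commit,Value=$CURRENT}]"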

Does it make sense? Am I missing anything?

Yes, it makes sense; this could be a way. We use something similar with another hosting provider.
Thanks.

Ok! Let me know if you manage to set up something like that or if you need any further help :slight_smile:

Hello,

As you suggested, I created a separate lifecycle job named “magento-assets-builder” which should run on the start event (i.e. when a container is deployed), but despite my attempts, it never ran automatically…

Am I missing something?

Hello!

From the latest deploy logs, I can see this job ran, so I’m not sure I get your point.
It should be triggered at environment deploy, as you configured (the config looks good).

What makes you think your job is not triggered automatically? What did you do where you expected it to run but it didn’t?

Cheers