Deployment stuck on "queued"

Issues information

Your issue

While migrating from v1 to v2, I have been updating all of the project's properties in the Dashboard and should now be at a point where I am ready to test. Yet nothing seems to trigger the build, and when I try to start it manually, or even remove the app and try again, I constantly get the “Specified action is not possible in this state: Start Cannot start deployment while environment is in WAITING state” message.

To be clear, when setting up the project initially, it did run the build at some point, but it was unsuccessful (as expected, since it’s missing things). Now, however, nothing seems to trigger a rebuild, and the dashboard just shows “Queued” forever.

I have also deleted the v1 project, and there is no .qovery.yml file in my repo anymore.

Dockerfile content (if any)

# Node 16 on Alpine Linux
FROM mhart/alpine-node:16
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the source, install dependencies, and build
COPY . .
RUN npm install
RUN npm run build
# The app listens on port 3000
EXPOSE 3000
CMD ["npm", "run", "start"]

@Erebe @Pierre_Mavro can you take a look? Thank you


You pushed your code to GitHub multiple times in a short period, which caused your app to get stuck in the queue.

Thanks @KartikBazzad, that’s interesting. I never had that issue with v1, and ideally that wouldn’t make sense; triggering a build should be possible as many times as needed, I would hope.

But what doesn’t add up is that it’s been stuck in the queue for a long time. Fair enough, there might have been 2–3 commits in a short period, but there have not been many commits since I set up the app.

More importantly, if you happen to know: what would be the solution, not just the cause?

The same thing happens to my apps too; they get stuck in the queue for many days. Try changing the build method from Dockerfile to Buildpacks in the app settings; that might clear the task queue. Also, v2 doesn’t need a .qovery.yml file anymore.


Ah, gotcha, thanks for the heads-up, will try it now 🙂
Yep, I removed .qovery.yml from the repo; sorry if that wasn’t clear in my OP.

Thanks for reporting. At the moment, this happens when someone pushes too many times and has long builds. Tasks are queued, and it can take a very long time (depending on the number of tasks in the queue). We’re working on a solution, as we’re aware this happens regularly; it will be deployed in a few weeks.

In the meantime, I’m flushing the current tasks; it should be good in the coming minutes.


Thanks for looking into it promptly and for confirming the cause of this, @Pierre_Mavro.

Hello @daniil,

Sorry for the delay. I have unstuck your environment (it was due to an internal bug that is now fixed). You are now free to use Qovery again.

Sorry again for the delay.


Much appreciated, @Erebe. Thank you for taking care of this.

To avoid creating a new topic: I am now having issues deploying a Redis instance, getting this error:

useradd: Permission denied.
useradd: cannot lock /etc/passwd; try again later.
chown: invalid user: 'redis'
redis 12:19:38.88 INFO  ==> ** Starting Redis **
1:C 20 Jul 2021 12:19:38.897 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 20 Jul 2021 12:19:38.897 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 20 Jul 2021 12:19:38.897 # Configuration loaded
1:M 20 Jul 2021 12:19:38.898 * Running mode=standalone, port=6379.
1:M 20 Jul 2021 12:19:38.898 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 20 Jul 2021 12:19:38.898 # Server initialized
1:M 20 Jul 2021 12:19:38.899 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
1:M 20 Jul 2021 12:19:38.899 * Ready to accept connections
2021-07-20T12:00:31Z Warning Unhealthy: Readiness probe failed: dial tcp 10.244.5.248:8080: connect: connection refused
2021-07-20T11:58:21Z Warning Unhealthy: Liveness probe failed: dial tcp 10.244.5.248:8080: connect: connection refused
 Warning FailedScheduling: 0/67 nodes are available: 59 pod has unbound immediate PersistentVolumeClaims, 8 node(s) exceed max volume count.
2021-07-20T12:19:31Z Warning SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: load-balancer is not yet active (current status: new)
2021-07-20T12:19:31Z Warning SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID 53dcd249-11de-4a0d-9a01-4fc81b9b83ee: PUT https://api.digitalocean.com/v2/load_balancers/53dcd249-11de-4a0d-9a01-4fc81b9b83ee: 403 (request "07a80479-1dd4-4770-8c64-9a3fbc54ac25") Load Balancer can't be updated while it processes previous actions
2021-07-20T12:19:36Z Warning SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID 53dcd249-11de-4a0d-9a01-4fc81b9b83ee: PUT https://api.digitalocean.com/v2/load_balancers/53dcd249-11de-4a0d-9a01-4fc81b9b83ee: 403 (request "9e7d7e08-4d72-44b4-8fb7-34cb1e76d282") Load Balancer can't be updated while it processes previous actions
2021-07-20T12:19:57Z Warning SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID 53dcd249-11de-4a0d-9a01-4fc81b9b83ee: PUT https://api.digitalocean.com/v2/load_balancers/53dcd249-11de-4a0d-9a01-4fc81b9b83ee: 403 (request "aba5b5be-9019-453e-9e15-20044a3b14b5") Load Balancer can't be updated while it processes previous actions
2021-07-20T12:20:37Z Warning SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID 53dcd249-11de-4a0d-9a01-4fc81b9b83ee: PUT https://api.digitalocean.com/v2/load_balancers/53dcd249-11de-4a0d-9a01-4fc81b9b83ee: 403 (request "4e45465c-5b22-4154-8aff-160949a5ffd8") Load Balancer can't be updated while it processes previous actions
2021-07-20T12:21:58Z Warning SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID 53dcd249-11de-4a0d-9a01-4fc81b9b83ee: PUT https://api.digitalocean.com/v2/load_balancers/53dcd249-11de-4a0d-9a01-4fc81b9b83ee: 403 (request "656d8169-925b-4f73-a6d0-4a4340fe771a") Load Balancer can't be updated while it processes previous actions

I can’t think of anything that can be done on my end, since I just added a new DB instance; there is not much to control that I am aware of.

It seems to be up now. For anyone who runs into this, in case it helps: I needed to add a PORT environment variable (which my Express server reads to know which port to start on) and ensure that it matches the public port I added in the application settings (under the Ports tab), in my case 8080.

Also, for some reason my DBs don’t seem to deploy as part of the app, so I deployed both DBs (in my case Postgres and Redis) before deploying the app.

You can deploy everything (apps + DBs) at once by deploying the environment from the environment screen (coming soon). If you deploy the app from the application screen, only that app will be deployed.
