We have cloned our production environment to a preproduction one, then configured a CNAME subdomain on both Qovery and our DNS provider. The DNS CNAME is correctly propagated.
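For what it's worth, this is roughly how we verified the propagation (a minimal sketch assuming dnspython is installed; the subdomain and target values are placeholders for our real ones):

```python
# Minimal CNAME propagation check with dnspython (pip install dnspython).
# The domain names below are hypothetical placeholders.
import dns.resolver

SUBDOMAIN = "app.preprod.example.com"
EXPECTED_TARGET = "some-target.qovery.io."  # value configured at the DNS provider

answers = dns.resolver.resolve(SUBDOMAIN, "CNAME")
for rdata in answers:
    target = rdata.target.to_text()
    print(f"{SUBDOMAIN} -> {target}")
    if target == EXPECTED_TARGET:
        print("CNAME is propagated correctly")
```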
The deployment logs are successful for both our application and the router, but our website subdomain returns a 503 Service Temporarily Unavailable error.
I think I found the issue. Here is what I see: your old and new applications are using the same custom domain. On the Nginx side that is not possible, so it uses the first declared one (routing traffic to the stopped instance).
Now here is what I'm guessing and would like you to confirm:
- You removed the custom domain from the old app on Qovery
- You did not redeploy that app
- You stopped it instead (so your app is not live anymore, but we keep the Nginx config to preserve TLS certificates and other things that should not change)
- You added the same custom domain to your new app
This is why I see two declarations of the same domain. Please tell me if that's what you did.
What I suggest in this case is to redeploy your old app so the custom domain is released from the load balancer, then stop it again. Finally, redeploy your new backend and it should be good.
Hi @Pierre_Mavro, was this issue resolved? We also started seeing “503 Service Temporarily Unavailable” on one of the applications, but we do not have 2 applications with the same domain as in this case. Are there any other conditions for this issue to appear? We do have 2 external domains on that particular application.
Everything looks OK; the sites are accessible. If you only encounter this in the first seconds/minutes whenever you deploy, I encourage you to set custom liveness and readiness probes. More info here: Troubleshoot | Docs | Qovery
Yes, it happens in the first minute or so. The initial delays are already increased. However, it's weird that the old application is taken down before the new one is ready to accept requests. The delay would only help if the probes were failing.
From what you're saying, I advise you to tune the liveness and readiness probes; that should solve your problem. Kubernetes needs to know how your application behaves (when it is up and running and when it is ready to receive traffic). Those settings will help you solve your issue.
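For illustration, this is roughly what those settings translate to on the underlying Kubernetes deployment, sketched with the official Kubernetes Python client (the path, port, and timings below are hypothetical examples, not your app's actual values):

```python
# Illustrative readiness/liveness probes using the official Kubernetes
# Python client (pip install kubernetes). Path, port, and timings are
# hypothetical; tune them to how your app actually starts.
from kubernetes import client

# Readiness: when the pod may receive traffic. Until this passes,
# the pod is kept out of the service endpoints.
readiness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=30,   # give migrations time to finish
    period_seconds=10,
    timeout_seconds=5,
    failure_threshold=6,
)

# Liveness: when the pod should be restarted. Keep it more lenient
# than readiness so slow startups are not killed prematurely.
liveness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=60,
    period_seconds=15,
    timeout_seconds=5,
    failure_threshold=3,
)

container = client.V1Container(
    name="app",
    image="my-app:latest",
    readiness_probe=readiness,
    liveness_probe=liveness,
)
```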
Hi @Pierre_Mavro, could you elaborate on what we should tune? AFAIK, Kubernetes should not terminate the existing pod until a new one has started and passed its liveness and readiness checks. If there's a problem with the liveness/readiness probes, then Kubernetes will try restarting the new pod, and if that is not successful, the deployment will eventually fail. We don't have these problems, but we occasionally see 503 errors. What's even more interesting is that we don't encounter these errors on every deployment.
Hi @bchastanier, our application is rather simple, and we used the default settings from Qovery. I have now tweaked it to use an HTTP probe instead of TCP to see if it makes any difference.
We check health by hitting the root path "/", which is a simple redirect. We run migrations before starting the server, so the port does not open right away and the health check does not pass initially. We had to increase the probe delay to allow extra time for the migrations.
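For anyone hitting the same thing, here is roughly the pattern we're considering instead: a dedicated health endpoint that opens the port immediately but only reports ready once migrations have completed. A minimal sketch using only Python's standard library (the /healthz path, port, and run_migrations function are placeholders, not our actual code):

```python
# Hypothetical sketch: open the port immediately, but report readiness
# only after migrations complete, using only the Python standard library.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

migrations_done = threading.Event()

def run_migrations():
    ...  # placeholder: run your schema migrations here
    migrations_done.set()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz" and migrations_done.is_set():
            self.send_response(200)   # ready: probe passes
        else:
            self.send_response(503)   # still migrating: probe fails
        self.end_headers()

threading.Thread(target=run_migrations, daemon=True).start()
HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

That way the TCP port opens right away, and the probe gates traffic on the HTTP status instead of on a fixed initial delay.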