You are right, there is a start/stop for services. This feature allows your services to be stopped on your cluster, but it stops the service pods, not the EC2 machines (your cluster will scale down eventually).
It seems what you are looking for is more of a pause-cluster feature: stopping a cluster stops the physical machines and all services running on it.
Okay, that answers the question of why the EC2 instances were not shutting down. This brings me to the next question: what is the difference between Start & Stop and Deployment Rules? They seem to do the same thing, but the difference is not clear to me at the moment.
Also, the deploy script that you shared uses THIS_ORG_KEY and THIS_CLUSTER, but from the screenshot above, it looks like they should be CLUSTER_ID and ORG_API_KEY. I’ve clicked all over the place, but I do not see these variables anywhere to create an alias like you did.
Will the cron job run even if the cluster is down?
Hey!
Indeed, I forgot to mention it, but in our case this setup runs on another cluster which is never stopped.
Such a setup requires an always-on cluster in order to work, as it relies on the Kubernetes scheduler.
I am not aware of any plan for our scheduler to allow you to avoid having at least one always-on cluster.
That being said, you can use any external tool to handle this scheduling and call the Qovery API to pause/start clusters (GitLab CI or anything else).
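To make that concrete, here is a minimal sketch of a scheduled GitLab CI pipeline that calls the Qovery API to stop and start a cluster. The variable names (`QOVERY_API_TOKEN`, `ORGANIZATION_ID`, `CLUSTER_ID`, `ACTION`) and the `/stop` and `/deploy` endpoint paths are assumptions for illustration; check the Qovery API reference for the exact routes and auth header before relying on them.

```yaml
# .gitlab-ci.yml — two scheduled jobs that pause/resume a Qovery cluster.
# QOVERY_API_TOKEN, ORGANIZATION_ID and CLUSTER_ID are assumed to be set as
# CI/CD variables; the endpoint paths below are illustrative assumptions.

stop-cluster:
  image: curlimages/curl:latest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $ACTION == "stop"'
  script:
    # --fail makes the job red if the API returns an error status.
    - >
      curl --fail -X POST
      -H "Authorization: Token $QOVERY_API_TOKEN"
      "https://api.qovery.com/organization/$ORGANIZATION_ID/cluster/$CLUSTER_ID/stop"

start-cluster:
  image: curlimages/curl:latest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $ACTION == "start"'
  script:
    - >
      curl --fail -X POST
      -H "Authorization: Token $QOVERY_API_TOKEN"
      "https://api.qovery.com/organization/$ORGANIZATION_ID/cluster/$CLUSTER_ID/deploy"
```

You would then create two pipeline schedules in GitLab (e.g. one in the evening with `ACTION=stop`, one in the morning with `ACTION=start`). Since this runs on GitLab's infrastructure, it works even while the target cluster is down.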
Regarding deployment rules, which start/stop are you referring to? Env/apps or clusters?
As for the environment variables you mentioned, it’s up to you to add them to the cron job responsible for start/stop so it can use them.
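For example, if the start/stop job is a plain cron job, the variables can be defined at the top of the crontab so the commands below inherit them. Again, the variable names, schedule, and endpoint paths here are illustrative assumptions, not the verified Qovery API contract:

```shell
# crontab -e
# Variables defined at the top of a crontab are exported to every job below.
QOVERY_API_TOKEN=your-token-here
ORGANIZATION_ID=your-org-id
CLUSTER_ID=your-cluster-id

# Stop the cluster weekdays at 20:00, start it again at 07:00 (paths assumed).
0 20 * * 1-5 curl --fail -X POST -H "Authorization: Token $QOVERY_API_TOKEN" "https://api.qovery.com/organization/$ORGANIZATION_ID/cluster/$CLUSTER_ID/stop"
0 7  * * 1-5 curl --fail -X POST -H "Authorization: Token $QOVERY_API_TOKEN" "https://api.qovery.com/organization/$ORGANIZATION_ID/cluster/$CLUSTER_ID/deploy"
```

Note this cron job must itself run on a machine that is always up, which is exactly why the setup relies on a never-stopped cluster or an external scheduler.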
Indeed, those two screens both refer to deployment rules.
The first one is at the project level and says "Configure your default deployment rules. Drag & drop rules to prioritize them.", allowing you to declare default deployment rules to be added to any newly created environment.
The second one is at the environment level, allowing you to set/edit the deployment rules for a given service.