Hi Team,
The “Add port” modal does not load for my application.
Hey @gangeshwark,
Can you please confirm your cluster version on the GKE console?
We tried to update your cluster this morning, but it is in an error state because the update tries to delete / recreate your cluster (an action we don’t allow).
```
CreateError - Error, validator `No destructive changes` (Prevent from resource destruction) has raised an error: Validation error on resource `google_container_cluster`: # google_container_cluster.primary must be replaced
```
Errors like this are usually due to actions / modifications done outside of Qovery (most likely via the GCP console), and from what I see, your cluster seems to be running k8s version 1.29.
Users shouldn’t update Qovery-managed resources outside of Qovery; otherwise it creates configuration drift, which can be very dangerous and lead to breaking changes.
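If it’s easier than the console, you can also check the control-plane version from a terminal. A minimal sketch, assuming gcloud is authenticated against the right project (replace the cluster name and region placeholders with yours):

```
# Print the current control-plane (master) k8s version of the cluster
gcloud container clusters describe YOUR_CLUSTER_NAME \
  --region YOUR_REGION \
  --format="value(currentMasterVersion)"
```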
Thanks
Hi @bchastanier
Here’s the screenshot of the version I see on the console.
Looks like the cluster is running on v1.28.
I did not update our cluster outside of Qovery. A few weeks ago I tried to make some changes to the cluster via Qovery, which resulted in an update that tried to delete and recreate my resources. Here’s the snippet of changes I tried to make. (See my next comment for the screenshot.)
Please let me know how to resolve this issue. One of our services is down because of this.
Thanks!
Hey @gangeshwark
Thanks for sharing this.
Would you mind sharing a screenshot of the k8s version on the GCP console?
You should have something like this:
Thanks @gangeshwark !
So indeed your cluster is running k8s 1.30.
Unfortunately we don’t support this version yet, but we are working on upgrading to version 1.29 before November and to 1.30 right after.
As stated above, it seems your GKE cluster was updated from 1.28 to 1.30 outside of Qovery. This should not be done because it creates configuration drift.
I cannot update / redeploy your cluster, because doing so would recreate it (delete everything on it and create a new one on 1.28).
Without a redeploy, Qovery won’t be able to work properly and deploy services / apps on your cluster.
We strongly encourage you to create a new GCP cluster and move your workload to it (via cloning environments).
Otherwise, any new deployment of your apps may break how they are exposed (see this post).
Sorry about that,
Thanks for the information @bchastanier
I can recreate the cluster and do the migration, but how do I prevent the new cluster from auto-updating to v1.30?
No, nothing; those updates are not triggered automatically, as far as I know.
Can you check if anyone in your organization triggered it?
You might be able to find the trigger in Logs Explorer?
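A query along these lines should surface manual cluster updates in the audit logs. A rough sketch; the method-name filter and the output fields are assumptions and may need adjusting for your setup:

```
# List recent manual GKE cluster update operations and who triggered them
gcloud logging read \
  'resource.type="gke_cluster" AND protoPayload.methodName="google.container.v1.ClusterManager.UpdateCluster"' \
  --limit 10 \
  --format="table(timestamp, protoPayload.authenticationInfo.principalEmail)"
```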
Actually, I am checking @gangeshwark; there is something regarding the release channel. We create clusters using the regular one, but maybe we should use the stable one.
Let me come back to you.
Thanks @bchastanier, looking forward to it.
To answer your question: no, none of my team members triggered the update. I also quickly scanned the logs and cannot find any manual triggers. Maybe it happened a long time ago - unclear for now.
So after digging deeper, I found the culprit … GKE Autopilot automatically upgrades clusters to the next k8s version.
By default, we are using the GKE REGULAR release channel.
We will change that to use the STABLE channel instead, which will buy us more time to ship version upgrades on our own schedule.
We will update GKE clusters to reflect this change.
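For reference, the channel a cluster is enrolled in can be inspected and changed per cluster. A sketch with placeholder name / region; on a Qovery-managed cluster this change should ultimately go through Qovery once it ships:

```
# Check which release channel the cluster is enrolled in
gcloud container clusters describe YOUR_CLUSTER_NAME \
  --region YOUR_REGION \
  --format="value(releaseChannel.channel)"

# Move the cluster from the REGULAR to the STABLE channel
gcloud container clusters update YOUR_CLUSTER_NAME \
  --region YOUR_REGION \
  --release-channel stable
```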
Sorry again for the confusion.
Let me know if your new cluster solved the issue.
Cheers
@bchastanier Unfortunately, the new cluster was created on the regular channel instead of stable and got auto-upgraded from 1.28 to 1.29.
For now the deployments work, but I worry that I may face the auto-upgrade to v1.30 again. Is there any way to stop upgrades? I have disabled the maintenance window on this cluster.
Hey @gangeshwark !
We are working on this topic. I will flip the default channel to the stable one, which will buy us some time and allow us to migrate before GKE forces the upgrade.
Be aware that the migration happens only once everything on the cluster works with the new version.
Everything should be ok, but I’ll keep an eye on your cluster.
Please let me know if you see anything weird in the meantime.
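If you want an extra guard against the 1.30 auto-upgrade in the meantime, a maintenance exclusion can postpone minor upgrades for a while. A sketch with placeholder cluster name, region, and dates; note that GKE caps how long an exclusion can last, so this delays upgrades rather than disabling them:

```
# Add a maintenance exclusion that blocks minor version upgrades
# for the given window (placeholder dates)
gcloud container clusters update YOUR_CLUSTER_NAME \
  --region YOUR_REGION \
  --add-maintenance-exclusion-name block-minor-upgrades \
  --add-maintenance-exclusion-start 2024-10-01T00:00:00Z \
  --add-maintenance-exclusion-end 2024-12-01T00:00:00Z \
  --add-maintenance-exclusion-scope no_minor_upgrades
```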
Cheers