Hello,
We are trying to delete our staging cluster. It shows the following error:
But we don’t have any environment running except Production.
URL: https://console.qovery.com/organization/7b2c7fcd-6cc9-4c61-8647-8bf81e4fcda9/clusters/general
Note: as we can’t delete the cluster, we are currently paying for an unused cluster + all the AWS resources like RDS
Hello @Francois,
I’m taking a look
You can delete your cluster now, it should be good.
There was still an environment present, “staging-new”, but it has been marked as deleted (every service inside has been deleted). Since the environment was marked as “DELETED”, we no longer display it in the console.
This is an issue we should resolve in the coming days.
Thanks!
Now the cluster deletion is broken & stuck…
It has been 1h30 since I started it
Logs say: Infrastructure 'Staging cluster AWS-DEV (zace6887d)' deletion is in progress...
Hello @Francois,
On deletion, we delete every namespace inside your cluster.
Looking at it, there seem to be some custom components installed that are waiting for finalizers, e.g. here it is stuck with argocd:
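(For anyone hitting the same thing: a minimal way to check this yourself, assuming you still have kubectl access to the cluster. The argocd namespace below is just the example from this thread.)

```bash
# List namespaces stuck in Terminating
kubectl get namespaces | grep Terminating

# See which finalizers / conditions are holding a namespace back (argocd as example)
kubectl get namespace argocd -o jsonpath='{.spec.finalizers}{"\n"}'
kubectl get namespace argocd -o jsonpath='{range .status.conditions[*]}{.type}{": "}{.message}{"\n"}{end}'

# List the resources that are still left in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n argocd --ignore-not-found --show-kind
```

The namespace’s status conditions usually say exactly which resources still have finalizers remaining.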
Thanks for the clear explanations, will check it out
Solved, and cluster deleted
Thanks @Melvin_Zottola
For anyone reading this: on cluster deletion, the namespace got emptied of its controllers before they had time to properly clean the finalizers off their resources, causing this “stuck loop” ^^
For ArgoCD, the controllers got deleted before the Application custom resources.
For KEDA, it was a bit more complicated, as external.metrics.k8s.io/v1beta1 was missing.
More info about it here: The Hidden Dangers of Terminating Namespaces
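(A sketch of the kind of manual cleanup that unblocks these two cases, assuming kubectl access. Only do this on a cluster that is being destroyed anyway, since stripping finalizers by hand skips the controllers’ normal cleanup.)

```bash
# ArgoCD: Application CRs keep their resources-finalizer once the controller is gone,
# so remove the finalizers to let the namespace deletion continue
kubectl get applications.argoproj.io -n argocd -o name \
  | xargs -I {} kubectl patch {} -n argocd --type merge -p '{"metadata":{"finalizers":null}}'

# KEDA: a stale APIService for external.metrics.k8s.io/v1beta1 with no backing service
# breaks API discovery and blocks the namespace cleanup; delete it
kubectl get apiservice v1beta1.external.metrics.k8s.io
kubectl delete apiservice v1beta1.external.metrics.k8s.io
```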
Thanks very much for the feedback @Francois
I maybe ~talked~ wrote a bit too fast
There were only a few pods left in one namespace (kube-system), so I assumed the delete button would just clean up the empty cluster and that clicking it was enough:
But it turned out it is now installing more pods (aws-node, kube-proxy, etc.)
And CoreDNS is failing to start: MountVolume.SetUp failed for volume "config-volume" : configmap "coredns" not found.
(Not sure why it tries to install more things while trying to delete? Maybe it does a last apply before the destroy?)
Indeed the “delete” action does a re-install + destroy.
So you should see the cluster being destroyed once everything is correctly created on the kube side.
From your cluster delete logs, the terraform destroy is running, so it is now in the destroy phase.
Hmm, it still seems stuck…
No nodes are being created to run CoreDNS.
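(For reference, a quick way to confirm this kind of symptom, assuming kubectl access; k8s-app=kube-dns is the label EKS uses for the CoreDNS pods.)

```bash
# Check whether any worker nodes have registered; CoreDNS stays Pending without them
kubectl get nodes
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# Pod events show why it cannot be scheduled or started
kubectl -n kube-system describe pod -l k8s-app=kube-dns | grep -A 5 Events
```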
We also had some resources installed in the Qovery VPC on our AWS account that blocked the cluster deletion (a Redis).
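(If anyone else hits this: leftover network interfaces are the usual tell-tale, and the AWS CLI can list them. The VPC ID below is a placeholder for the VPC Qovery created for the cluster.)

```bash
# Placeholder: replace with the VPC created by Qovery for the cluster
VPC_ID=vpc-0123456789abcdef0

# Any network interface still attached to the VPC (in this thread it was a Redis)
# prevents the VPC, and therefore the cluster infrastructure, from being deleted
aws ec2 describe-network-interfaces \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Description:Description,Status:Status}' \
  --output table
```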
Cluster is now deleted
Closing the thread