We are trying to remove a cluster and getting the following error.
Unknown error while performing Terraform command (terraform state rm aws_s3_bucket.loki_bucket), here is the error:
Error: Invalid target address
No matching objects found. To view the available instances, use "terraform state list". Please modify the address to reference a specific instance.
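For reference, the usual way to recover from this kind of "Invalid target address" error is to list the state first and copy the exact address; the commands below are a sketch (the actual address may carry a module prefix or an index depending on how the bucket was declared):

```shell
# List every resource tracked in the current Terraform state,
# filtered down to the bucket in question, to find its exact address.
terraform state list | grep loki_bucket

# Remove the resource using the exact address printed above.
# Quote it if it contains an index, e.g.:
#   terraform state rm 'aws_s3_bucket.loki_bucket[0]'
#   terraform state rm 'module.loki.aws_s3_bucket.loki_bucket'
terraform state rm aws_s3_bucket.loki_bucket
```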
Cluster ID: 465f55d6-4744-44b9-983d-98713ced4c11
Exec. ID: a7aa8c02-7dd5-42b6-b3f6-907fa0b65531-1669386922
Org. ID: e2434c8e-0905-4969-9536-670f9a774f44
We renamed this cluster earlier today, so I'm not sure if that's what's causing this. Could you please check?
Hi @Rahul just saw the issue, I will have a look, sorry about that.
I found a bug on our end. I pushed a patch to fix this, which should be live soon. In the meantime, I can delete your clusters on my side; could you share the IDs and names of the clusters you want to delete?
@bchastanier great, thanks.
Cluster ID is 465f55d6-4744-44b9-983d-98713ced4c11 under AWS Dev/Staging (Old).
It’s currently updating so once that’s done you can go ahead and delete this cluster.
I got a dependency error while trying to delete your cluster; it's caused by a VPC the cluster uses.
I don't know if force-deleting it on my side would have side effects if other resources on your side use this VPC. Maybe you should remove the VPC connection first.
Please let me know.
Error: deleting Security Group (sg-093a76205cef26702): DependencyViolation: resource sg-093a76205cef26702 has a dependent object
status code: 400, request id: ff4c7406-9f35-424b-a833-ae2889ef5f27
Error: error deleting ElastiCache Subnet Group (elasticache-vpc-0e22e22e081d6e64f): CacheSubnetGroupInUse: Cache subnet group elasticache-vpc-0e22e22e081d6e64f is currently in use by a cache cluster.
status code: 400, request id: 39a0f676-6308-464b-b43a-9933c6d449d2
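For anyone hitting the same DependencyViolation, one way to find what is still attached is the standard AWS CLI (a sketch; it assumes credentials for the affected account, and the security group ID is the one from the error above):

```shell
# Find network interfaces still attached to the security group;
# these are the dependent objects that block its deletion.
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-093a76205cef26702 \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Desc:Description}'

# Likewise, list cache clusters and their subnet groups to see
# which cluster is still holding the ElastiCache subnet group in use.
aws elasticache describe-cache-clusters \
  --query 'CacheClusters[].{Id:CacheClusterId,SubnetGroup:CacheSubnetGroupName}'
```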
I am not sure if deleting this VPC (the one holding the ElastiCache Subnet Group elasticache-vpc-0e22e22e081d6e64f) will cause an issue with our new Dev/Staging cluster, as this particular VPC is also created by Qovery's engine.
Basically, what we did yesterday was remove our old clusters and add new ones with a static IP address; we made no other changes. I managed to remove our old production cluster without any issues, so I'm just not entirely sure why this particular VPC is causing trouble with the old staging cluster.
I will investigate this a bit more and see what went wrong.
@bchastanier Looks like we had an old Redis cluster sitting and linked to that specific VPC. I removed this old Redis cluster and disconnected the VPC connection.
For some reason, when I deleted our environments yesterday, this particular Redis cluster wasn't deleted from AWS, so it all makes sense now. You might want to investigate this to prevent similar issues in the future.
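For completeness, the leftover Redis cluster and the subnet group it was holding can be cleaned up with the AWS CLI (a sketch; `old-redis-cluster` is a placeholder ID, and the subnet group name is the one from the earlier error):

```shell
# Delete the orphaned ElastiCache (Redis) cluster.
# The cluster ID below is hypothetical; use the real one from
# `aws elasticache describe-cache-clusters`.
aws elasticache delete-cache-cluster --cache-cluster-id old-redis-cluster

# Once the cluster is fully deleted, the subnet group can be removed:
aws elasticache delete-cache-subnet-group \
  --cache-subnet-group-name elasticache-vpc-0e22e22e081d6e64f
```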
I finally managed to delete the dev/staging cluster, so thank you for looking into this and for your help!
Hi @Rahul !
Thanks for the heads-up.
By the way, the patch fixing the S3 bucket error that prevented you from deleting your cluster was released yesterday, so you should be good to go (your cluster was properly deleted, so we are all set).
Please do not hesitate to reach out if you need further help.