Terraform unable to update cluster settings

I am trying to update the instance type (nodes) of my cluster by running Terraform.

Below is the output of my terraform plan, which is self-explanatory:

  # module.qovery_cluster.qovery_cluster.my_cluster will be updated in-place
  ~ resource "qovery_cluster" "my_cluster" {
      ~ id                = "XXX" -> (known after apply)
      ~ instance_type     = "T3_LARGE" -> "T3_MEDIUM"
      ~ max_running_nodes = 5 -> 10
      ~ min_running_nodes = 3 -> 5
        name              = "XXX"
        # (7 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

When I then try to apply the plan, I get the error below:

module.qovery_cluster.qovery_cluster.my_cluster: Modifying... [id=XXX]
│ Error: Error on cluster update
│   with module.qovery_cluster.qovery_cluster.my_cluster,
│   on modules/qovery/cluster/infra.tf line 25, in resource "qovery_cluster" "my_cluster":
│   25: resource "qovery_cluster" "my_cluster" {
│ Could not update cluster 'XXX', unexpected
│ error: 400 Bad Request
Operation failed: failed running terraform apply (exit 1)

Why is this happening?

FYI - updating the cluster from the Qovery GUI console works fine.

Hello @dugwa

Can you tell me what version of the provider you are using and the ID of the cluster you wanted to update?

@bilel - I am using the following provider configuration:

terraform {
  required_providers {
    qovery = {
      source  = "qovery/qovery"
      version = "0.5.2"
    }
  }
  required_version = ">= 1.2"
}

Could you try upgrading to the latest version, v0.8.0?
We introduced some breaking changes in the API regarding how we create and update clusters, specifically around the kubernetes mode of the cluster, which is now a required field, 'kubernetes_mode'.
This has been fixed in v0.6.0 of the provider.
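Upgrading is just a matter of bumping the version constraint. A minimal sketch of the updated block (assuming you keep the same registry source as in your current config):

```hcl
terraform {
  required_providers {
    qovery = {
      source  = "qovery/qovery"
      # bump from 0.5.2 to the suggested release
      version = "0.8.0"
    }
  }
  required_version = ">= 1.2"
}
```

After editing, run `terraform init -upgrade` so Terraform downloads the newer provider version, then re-run `terraform plan`.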


The documentation actually says that kubernetes mode is optional.

The default value is MANAGED - this is what we want, right?

Upgrading the plugin has resolved this issue.

Yes, you're right - it is optional with the new version because, on versions 0.6.0 and above, the provider sends the default value 'MANAGED' to the API.
However, on older versions this field wasn't sent to the API at all, which caused the 400 error.

In your case you want to use MANAGED.
The other value, K3S, should only be used if you want to run your cluster on an EC2 instance using K3S.
This is a new feature that is still in beta, so we do not recommend it for production use.
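If you prefer not to rely on the default, you can set the field explicitly. A sketch of what that might look like, reusing the attributes visible in the plan output above (any attributes not shown there are left as placeholders, not real values):

```hcl
resource "qovery_cluster" "my_cluster" {
  name              = "XXX"
  # explicit, though MANAGED is the default on provider >= 0.6.0
  kubernetes_mode   = "MANAGED"
  instance_type     = "T3_MEDIUM"
  min_running_nodes = 5
  max_running_nodes = 10
  # ... other attributes unchanged from your existing config
}
```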

I’m glad this fixed the issue for you :slight_smile:
