We have been preparing all the necessary updates to migrate your Kubernetes cluster from the current version (1.26) up to version 1.28.
Most of the preparatory work is done, and we wanted to give you more visibility into the plan we have in mind to upgrade all the clusters, including our own.
As usual, we will first upgrade every non-production cluster, giving you the chance to verify that everything works fine in your environments before we upgrade your production clusters. Make sure your clusters are appropriately tagged as production or non-production.
This will give you the opportunity to verify that everything works as expected on your services with the new Kubernetes version (see the section below “Does the upgrade have any impact on my services?”).
Please note that the upgrade won’t be a big bang from 1.26 to 1.28: we have to upgrade the clusters through each intermediate version (1.26 → 1.27 → 1.28).
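If you want to follow along during the stepwise upgrade, you can check which version your nodes are currently running with kubectl (this is a generic sketch; it assumes you have kubectl access configured against your cluster):

```shell
# Show each node with its kubelet version (the VERSION column
# tells you whether the node rotation to the new version is done)
kubectl get nodes -o wide

# Show the control-plane (API server) version
kubectl version
```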
If there is any specific reason we should delay the upgrade of your cluster, please fill in this form.
We will keep updating this post and our status page with all the information about the upgrades.
For any questions, please comment directly within this thread!
Upgrade 1.26 → 1.27 - DONE
– Set default new cluster version to 1.27
– Migrate any cluster flagged as non-production to 1.27 in the Qovery console (making sure to exclude any clusters having `test` in their names)
– Migrate all the production clusters to 1.27
– Set default new cluster version to 1.28
Upgrade 1.27 → 1.28 - IN PROGRESS
– Migrate any cluster flagged as non-production to 1.28 in the Qovery console (making sure to exclude any clusters having `test` in their names). IN PROGRESS
– Migrate all the production clusters to 1.28
Each cloud provider supports a limited number of Kubernetes versions, and Qovery manages the upgrades for you!
More info on the supported Kubernetes versions by cloud provider:
- Services deployed via Qovery
Kubernetes manages the upgrade by automatically creating new nodes with the new version, migrating the pods onto the new nodes, and shutting down the old ones. The upgrade might cause a very short downtime for your applications. If you want to avoid this downtime, you should:
- set at least 2 instances for your applications (within the application settings) so that at least 1 instance remains available to receive traffic
- set the correct liveness/readiness probes (using the health checks section) so that newly created instances of your service receive traffic only when they are ready
- redeploy your services if they have not been redeployed within the period defined by the `registry.image_retention_time` cluster advanced setting. Otherwise your services won’t be able to start once the migration is done, as their image won’t be available anymore.
Please note that, even outside this migration period, we strongly advise you to apply the points above to ensure no service disruption during the deployment of your applications.
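As an illustration, here is roughly what the two first recommendations translate to at the Kubernetes level. Qovery generates this for you from your application settings, so you don’t write it yourself; all names, ports and paths below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 2                  # at least 2 instances: one stays up while nodes rotate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:      # new instances receive traffic only when ready
            httpGet:
              path: /healthz   # placeholder health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:       # restart the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
```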
- Services deployed by yourself (via a helm chart)
For any service you installed yourself, please ensure it is compatible with the new Kubernetes versions. You can test it either by creating a new cluster on the new version (once it becomes the default) or by testing on your non-production cluster once it has been upgraded.
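One way to spot compatibility issues ahead of time is to scan your self-managed Helm releases for APIs that were deprecated or removed in newer Kubernetes versions. As an illustration only, a third-party deprecation scanner such as Fairwinds’ pluto can do this (flag names are given from memory; check `pluto --help` for your version, and `./my-chart` is a placeholder path):

```shell
# Scan the Helm releases installed in the current cluster
# for deprecated/removed APIs, targeting Kubernetes 1.28
pluto detect-helm -o wide --target-versions k8s=v1.28.0

# Scan local chart/manifest files before installing them
pluto detect-files -d ./my-chart --target-versions k8s=v1.28.0
```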
We are upgrading the agent running on your cluster to adapt to the new Kubernetes version; this also changes how it retrieves Kubernetes resources.
If you lose access to your service logs or statuses, the only thing you need to do is re-trigger the deployment of the environment to refresh the deployed information.
Triggering the deployment of a single service is enough to update the whole environment.
Once the deployment is done, you will regain access to your service logs and statuses without any further action on your part.