Hello, out of nowhere we started getting a huge amount of error logs from a self-managed GCP Autopilot cluster running Qovery. The errors all come from the qovery namespace and look like this:
Thanks for contacting us. Your cluster is self-managed, so we don’t have access to it, but I will try to help.
Looking at the log, this appears to be a permission problem with Loki.
Can you check that the values you are using with the Loki Helm chart are correct, in particular serviceAccount.annotations.iam.gke.io/gcp-service-account?
The GCP service account referenced by that annotation needs the roles/storage.objectAdmin role (there is a sketch below).
For managed clusters, we use this configuration:
project = project_name
role = "roles/storage.objectAdmin"
This is really strange, because we do not have direct access to the Loki Helm chart; it is managed through the configuration of the Qovery Helm chart. Do I have to change something in the Qovery Helm chart? It started when I tried to deploy a persistent volume claim on the cluster. Do you have any advice?
Hello @Nextools,
Are you using Loki logging?
If not, you can disable it in the config file. It is enabled by default, but in your installation it seems to be missing some storage configuration.
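As a rough sketch only (the key names below are assumptions, so verify them against the values.yaml shipped with your version of the qovery self-managed chart), disabling the bundled Loki could look like this:

```yaml
# qovery self-managed chart values (hypothetical keys; check your chart's
# values.yaml for the exact structure before applying)
services:
  logging:
    loki:
      enabled: false      # turn off the bundled Loki
    promtail:
      enabled: false      # promtail only ships logs to Loki, so disable it too
```

After changing the values, re-apply the chart (helm upgrade with your values file) so the Loki pods are removed.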
Perfect, re-applying the chart removed the other pods, but now I’m not able to see any live logs, and the service is shown as “STOPPED” even though it is not. I think this makes sense since we removed Loki (which is in charge of logging). The goal is still to have logging enabled; the original problem, though, was the large amount of error logs generated by something I cannot identify.
Do you have any other suggestions?
Hi, that would be great; unfortunately, it is not solved yet. The problem started when I deployed a container with a persistent volume attached to it. I’m using a self-managed Google Autopilot Kubernetes cluster.