Kube System Pods are down in Qovery

Getting the error below in my kube-system logs (metrics-server and external-dns):

Failed to load logs: Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)

Reason: InternalError (500)

Hi, do you manage Kubernetes deployments on your own (with kubectl/helm…) or 100% with Qovery?

It looks like you lack resources here.

Looking at your cluster, your node groups look to be in bad shape. Did you recently change something in your AWS console? Change your AWS access key, or reset the Qovery user access key? Here is what I see:

Looking at your EKS nodegroup, I can see this from your AWS account:

        "health": {
            "issues": [
                {
                    "code": "AccessDenied",
                    "message": "The aws-auth ConfigMap in your cluster is invalid.",
                    "resourceIds": [
Also, I see three people added to the Admins group. Is it possible that one of them has their own account disabled, or something like that? Or did you update the permissions of the Qovery account?

Note: the aws-auth ConfigMap is regularly regenerated by Qovery, and looking into it, at first sight it looks correct. So I'm guessing something recently changed (permissions or users).

100% with Qovery


Yes, I did update the ConfigMap using the kubectl edit configmap aws-auth -n kube-system -o yaml command.

Ok, what was the purpose of this update? What are you trying to achieve?

Good question!

First of all, I have reverted my changes and I think that has fixed the issue.

Back to your question: I want to be able to give other team members access to view the EKS configuration in the AWS console.

The way I have done this before is to add the users' role ARNs to the aws-auth ConfigMap with kubectl, but for some reason each time I add the roles, the YAML file does not like my changes.
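For context, the kind of entry I was adding looked roughly like this (the account ID, role name, username, and group below are placeholders, not my real values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN and names; a real entry would use your own account,
    # role, and an RBAC group bound to the permissions you want to grant.
    - rolearn: arn:aws:iam::111122223333:role/TeamReadOnlyRole
      username: team-readonly
      groups:
        - viewers
```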

Ahah yes, I see. As said above, we have a tool that regularly updates this config, so you shouldn't edit the file manually, otherwise it will conflict.
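To illustrate why manual edits conflict with such a tool, here is a toy reconciler sketch (not Qovery's actual code, and the role ARN is a placeholder): the tool re-renders the desired ConfigMap content from its own records and replaces whatever is live, so entries added by hand disappear on the next pass.

```python
# Toy illustration of a reconciliation loop over a managed ConfigMap.
# Not Qovery's real implementation; names and ARNs are placeholders.

def render_desired() -> dict:
    """Desired aws-auth content, rendered from the platform's own records."""
    return {"mapRoles": ["arn:aws:iam::111122223333:role/qovery-node-role"]}

def reconcile(live: dict) -> dict:
    """Return what the cluster should have; any manual drift is discarded."""
    return render_desired()

# A manual edit adds an extra role ARN to the live ConfigMap...
live = {"mapRoles": [
    "arn:aws:iam::111122223333:role/qovery-node-role",
    "arn:aws:iam::111122223333:role/my-team-role",  # manually added
]}

# ...but the next reconcile pass restores the rendered config,
# and the manually added entry is gone.
result = reconcile(live)
print("arn:aws:iam::111122223333:role/my-team-role" in result["mapRoles"])
```

This is why the supported path is to manage access through the Admins group rather than editing the ConfigMap directly.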

My suggestion for this is to use the Admins group: add everyone you want to give access to into this group, wait 5 minutes, and it should be OK.

More info: Amazon Web Services (AWS) | Docs | Qovery

What tool are you using for this task?

Not 100% sure about which task you’re talking about.

If you’re talking about the Kubernetes screenshot, I’m using k9s. Otherwise, it’s the AWS CLI.

I have followed this documentation and it seems to have done the trick.

So basically, from my understanding: Qovery does not want you to modify the kube-system ConfigMaps using kubectl. Qovery instead expects you to create IAM groups and policies to implement this capability.

Exactly, otherwise, it’s impossible to maintain automatically (due to conflicts). This is why we’ve made this “Admins” group.

Managing this content directly from Qovery is something we want to support in the future (for example with SSO).