Unable to access cluster using kubectl

I have been able to access my cluster for a while, but today I am no longer able to access it using kubectl.

I redownloaded my kube config following these steps again: How to connect to your EKS cluster with kubectl | Qovery

I am receiving this error:
kubectl get pods
I0814 11:08:58.821673 3072 versioner.go:58] exec plugin: invalid apiVersion "client.authentication.k8s.io/v1"
error: You must be logged in to the server (Unauthorized)

If I change the version back to v1beta1 in the kube config, as it was before, I get this error:
kubectl get pods
I0814 11:12:17.393945 3213 versioner.go:58] the server has asked for the client to provide credentials
error: You must be logged in to the server (Unauthorized)

I’ve tried updating my aws credentials as well.
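In case it helps anyone hitting the same errors: rather than editing the apiVersion by hand, the kubeconfig entry can be regenerated with the AWS CLI. This is a sketch, assuming the AWS CLI is installed and using the cluster name and region visible in the config pasted further down.

```shell
# Sketch, not from the thread: regenerate the kubeconfig entry so the
# exec apiVersion matches what the installed clients support.
# Cluster name and region are taken from the config pasted in this thread.
CLUSTER_NAME="qovery-z0925b2a5"
REGION="us-east-2"

if command -v aws >/dev/null 2>&1; then
  # Rewrites the cluster/context/user entries in ~/.kube/config
  aws eks update-kubeconfig --name "$CLUSTER_NAME" --region "$REGION"

  # Run the same command the kubeconfig exec plugin runs, to separate
  # credential problems from kubeconfig-format problems
  aws eks get-token --cluster-name "$CLUSTER_NAME" --region "$REGION"
fi
```

If `aws eks get-token` succeeds on its own but `kubectl` still reports Unauthorized, the problem is on the cluster-access side rather than in the kubeconfig.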

Hello @rjohnson,

Can you confirm the AWS user you are using to connect is part of “Admins” group in AWS IAM?
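A quick way to check this from the terminal (a sketch, assuming the AWS CLI is configured with the same credentials kubectl uses; `my-user` is a placeholder, not a name from this thread):

```shell
# Sketch: confirm which IAM principal the current credentials resolve to,
# and whether that user belongs to the "Admins" group.
EXPECTED_GROUP="Admins"

if command -v aws >/dev/null 2>&1; then
  # The ARN printed here is the identity kubectl will authenticate as
  aws sts get-caller-identity

  # Lists the IAM groups for the user; the output should include "Admins".
  # "my-user" is a placeholder user name.
  aws iam list-groups-for-user --user-name my-user
fi
```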


Yes, it is:

My access was working this morning and then just stopped.

Kube config looks like so:

apiVersion: v1
clusters:
  - cluster:
      server: https://14B1A9A3E2D4819464CA63998CB9BA27.gr7.us-east-2.eks.amazonaws.com
      certificate-authority-data: ***
    name: aws_z0925b2a5
contexts:
  - context:
      cluster: aws_z0925b2a5
      user: aws_z0925b2a5
    name: aws_z0925b2a5
current-context: aws_z0925b2a5
kind: Config
preferences: {}
users:
  - name: aws_z0925b2a5
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1
        interactiveMode: IfAvailable
        command: aws
        args:
          - "eks"
          - "get-token"
          - "--cluster-name"
          - "qovery-z0925b2a5"
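As a side note on the "invalid apiVersion" error: it usually means the exec apiVersion in this user entry is newer than what the installed clients understand (older aws-cli/kubectl pairs only accept v1beta1). Comparing versions is a quick sanity check — a sketch, assuming both CLIs are on the PATH:

```shell
# Sketch: report the installed client versions. Older aws-cli/kubectl
# pairs only accept client.authentication.k8s.io/v1beta1 in the exec
# block, which would explain the "invalid apiVersion" error above.
EXEC_API_VERSION="client.authentication.k8s.io/v1"  # value from the config above

if command -v aws >/dev/null 2>&1; then
  aws --version
fi
if command -v kubectl >/dev/null 2>&1; then
  kubectl version --client
fi
```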

Seeing this message at the top of the console:

Hey @rjohnson,

Your cluster is updating to reflect the latest changes we introduced to switch from users to roles on AWS. It seems the workers are still updating (you've got a lot of nodes), but it should be done shortly.
Once the update completes, you should get your access back.

Sorry for the inconvenience :confused:


Great, thanks @bchastanier !

Should be good now @rjohnson !