[NodeCreationFailure] Failure on Cluster Creation


We have been trying to create a new cluster for the last 2 days, and it never works for us.

It always returns the generic error message below.

We don’t know what we’re missing here.

We checked our AWS EC2 quotas, and I think we have more than enough.
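For anyone wanting to do the same check, the on-demand vCPU quota can be compared against usage with the AWS CLI, assuming it is installed and configured (the region is a placeholder; `L-1216C47A` is the "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances" quota code):

```shell
# Show the regional on-demand vCPU quota for standard instance families.
aws service-quotas get-service-quota \
  --region us-east-1 \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --query 'Quota.Value'

# Count the instances currently running in the region for comparison.
aws ec2 describe-instances \
  --region us-east-1 \
  --filters Name=instance-state-name,Values=running \
  --query 'length(Reservations[].Instances[])'
```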

Note that we currently have 3 active clusters, and we're trying to create the fourth one.

Our AWS account is dedicated only to these clusters; we're not sharing it with anything else.

And whenever it fails, we have to log in to AWS and delete the zombie nodes, clusters, and VPCs manually.

Thank you for the help.


Can you give me the console URL of your organization/cluster so I can take a look at it?


Here is the URL


FYI, because the installation had already failed, we manually deleted the resources associated with this cluster.

Thank you.

Thank you, I am taking a look.

Hi back,

So I took a look at it, and I don't have more information than what AWS is giving us with "Instances failed to join the Kubernetes cluster".

This sometimes happens because of the instance type selected, or because there are not enough machines of this instance type available in the given region.

But on our side, we can't do much. I recommend you either change the instance type and trigger a re-deployment, or create a ticket with AWS support asking them to investigate the issue.

I am deleting all AWS resources for this cluster for you in the meantime.
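As a side note, one quick way to check whether a given instance type is actually offered in your region's availability zones is the AWS CLI (the instance type and region below are placeholders; substitute your own):

```shell
# List the availability zones in the region that offer the instance type.
aws ec2 describe-instance-type-offerings \
  --region us-east-1 \
  --location-type availability-zone \
  --filters Name=instance-type,Values=t3.large \
  --query 'InstanceTypeOfferings[].Location' \
  --output text
```

An empty result means the type is not offered there at all, which is distinct from a transient capacity shortage.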

But the installation process did create a bunch of nodes for us, so AWS must have enough capacity for that instance type, right?

It actually created 6 nodes of that type, and we had to clean them up manually.

Wondering if the Elastic IP limit per region could potentially be the problem here?
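For anyone checking the same hypothesis, the regional Elastic IP quota and current usage can be compared with the AWS CLI, assuming it is configured for the right region (`L-0263D0A3` is the "EC2-VPC Elastic IPs" quota code):

```shell
# Show the Elastic IP quota for the region (L-0263D0A3 = "EC2-VPC Elastic IPs").
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-0263D0A3 \
  --query 'Quota.Value'

# Count the Elastic IPs currently allocated in the region.
aws ec2 describe-addresses --query 'length(Addresses)'
```

If the allocated count is at or near the quota, a new cluster's NAT gateways will fail to get addresses and nodes can fail to join.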

We resolved the above; everything is working now. Thanks for the help.


Curious what the fix was for this; we are running into the same issue creating a second AWS cluster.