Does it mean we were running 30 nodes across the full cluster at full capacity, so we could not replace nodes to deploy new ones?
We were able to solve it by increasing the limit to 50 nodes and restarting the cluster, but we would like to understand this for the future.
Hi @enzoferey , we’re about to roll out Karpenter to your cluster once it’s stable (cf. this thread) - this kind of issue will no longer be a problem. It’s caused by the default Kubernetes cluster autoscaler, which does not schedule pods based on the real workload.
If you want to better understand the issue you’re currently facing, I invite you to read this detailed thread.
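To give a rough intuition of what the scheduler does: it places pods based on *requested* resources, not on actual usage, so a pod can stay Pending even when every node looks idle. A toy sketch of that check (all node names and numbers below are hypothetical, CPU in millicores):

```python
# Toy model of the scheduler's fit check: it compares a pod's CPU *request*
# against each node's allocatable CPU minus what is already *requested*,
# regardless of how much CPU is actually in use.
# All names and numbers are hypothetical; CPU values are in millicores.

nodes = [
    {"name": "node-1", "allocatable": 2000, "requested": 1800, "actual_usage": 300},
    {"name": "node-2", "allocatable": 2000, "requested": 1900, "actual_usage": 250},
]

def schedulable_nodes(pod_request, nodes):
    """Return names of nodes with enough unrequested CPU for the pod."""
    return [n["name"] for n in nodes
            if n["allocatable"] - n["requested"] >= pod_request]

# A 500m pod stays Pending: both nodes are nearly idle (low actual usage),
# but their capacity is already reserved by existing requests.
print(schedulable_nodes(500, nodes))  # → []
```

This is why adding nodes unblocked the deploy: it created fresh unrequested capacity, even though the existing nodes were far from busy in terms of real usage.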