Which component are you using?:
cluster-autoscaler

What version of the component are you using?:
Component version: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2 (also reproduced after upgrading to registry.k8s.io/autoscaling/cluster-autoscaler:v1.29.3)

What k8s version are you using (kubectl version)?:
kubectl version Output
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"29+",
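For reference, the component version above was read from the running Deployment, roughly as follows (the Deployment name and namespace are assumptions based on a typical install; adjust for yours):

```shell
# Print the image (and therefore the version tag) of the running
# Cluster Autoscaler. Deployment name/namespace may differ per install.
kubectl -n kube-system get deployment/cluster-autoscaler \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```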
What environment is this in?:
AWS EKS
What did you expect to happen?:
The pods should be scheduled, with the Cluster Autoscaler creating new nodes in the particular node pool as needed.
What happened instead?:
Pods of scheduled Jobs stay in a Pending state because no nodes are available; no new nodes are created in the particular node pool. Once we restart the Cluster Autoscaler, nodes are created and the pods schedule.
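The restart workaround we use is just a rollout restart of the autoscaler Deployment (the Deployment name and namespace below are assumptions based on a typical install):

```shell
# Restart the Cluster Autoscaler. After the new pod comes up,
# scale-up resumes and the Pending pods get nodes.
kubectl -n kube-system rollout restart deployment/cluster-autoscaler
```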
How to reproduce it (as minimally and precisely as possible):
I'm not sure how to reproduce this because this problem is only happening in two clusters. We have one more cluster, and it's working fine.
Anything else we need to know?:
We upgraded the Cluster Autoscaler from 1.26.2 to 1.29.3 in one of the affected clusters, but the issue persists. It shows up in different node pools and is intermittent. We shut the clusters down during off hours, but kube-system is not shut down. This was working fine until last week.
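When this happens, we check the autoscaler logs for scale-up decisions before restarting; something like the following (Deployment name and namespace assumed, as above):

```shell
# Look for scale-up activity or NotTriggerScaleUp decisions
# in the last hour of Cluster Autoscaler logs.
kubectl -n kube-system logs deployment/cluster-autoscaler --since=1h \
  | grep -iE 'scale.?up|NotTriggerScaleUp'
```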