
Can't use multiple targets on helm chart #248

@johnitvn


I installed it using the command below:

helm upgrade --install kube-system-autoscaling cluster-proportional-autoscaler/cluster-proportional-autoscaler  \
--labels=catalog.cattle.io/cluster-repo-name=cluster-proportional-autoscaler  \
--namespace kube-system \
--create-namespace \
--wait \
-f - <<EOF
image:
  tag: v1.9.0
config:
  ladder:
    nodesToReplicas:
      - [ 1, 1 ]
      - [ 2, 2 ]
      - [ 3, 2 ]
      - [ 7, 3 ]
      - [ 9, 5 ]
    includeUnschedulableNodes: false
options:
  target: "deployment/coredns,deployment/metrics-server"
resources:
  requests:
      cpu: "50m"
      memory: "12Mi"
  limits:
      cpu: "50m"
      memory: "24Mi"
serviceAccount:
  name: kube-system-autoscaling
EOF

The deployment output (kubectl get deploy -n kube-system kube-system-autoscaling-cluster-proportional-autoscaler -o yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: kube-system-autoscaling
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2025-03-07T21:27:55Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: kube-system-autoscaling
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cluster-proportional-autoscaler
    app.kubernetes.io/version: 1.8.6
    helm.sh/chart: cluster-proportional-autoscaler-1.1.0
  name: kube-system-autoscaling-cluster-proportional-autoscaler
  namespace: kube-system
  resourceVersion: "31834"
  uid: 209725cd-0345-4000-b44a-688eb91d3c27
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: kube-system-autoscaling
      app.kubernetes.io/name: cluster-proportional-autoscaler
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: kube-system-autoscaling
        app.kubernetes.io/name: cluster-proportional-autoscaler
    spec:
      containers:
      - args:
        - --configmap=kube-system-autoscaling-cluster-proportional-autoscaler
        - --logtostderr=true
        - --namespace=kube-system
        - --target=deployment/coredns,deployment/metrics-server
        - --v=0
        - --max-sync-failures=0
        image: registry.k8s.io/cpa/cluster-proportional-autoscaler:v1.9.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: cluster-proportional-autoscaler
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 50m
            memory: 24Mi
          requests:
            cpu: 50m
            memory: 12Mi
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-system-autoscaling
      serviceAccountName: kube-system-autoscaling
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2025-03-07T21:27:55Z"
    lastUpdateTime: "2025-03-07T21:27:55Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2025-03-07T21:27:55Z"
    lastUpdateTime: "2025-03-07T21:27:55Z"
    message: ReplicaSet "kube-system-autoscaling-cluster-proportional-autoscaler-7848688747"
      is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
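
To confirm the chart passes the value through unchanged, the rendered container args can also be inspected directly (a quick check using standard kubectl JSONPath):

kubectl get deploy kube-system-autoscaling-cluster-proportional-autoscaler \
  -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].args}'

This prints the args list, including --target=deployment/coredns,deployment/metrics-server, so the comma-separated value reaches the autoscaler binary as-is, which suggests the failure is in the binary's flag parsing rather than in the chart templating.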

And the pod logs:

I0307 21:28:30.752826       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns,deployment/metrics-server
E0307 21:28:31.155552       1 autoscaler.go:52] target format error: deployment/coredns,deployment/metrics-server

I have already tried versions 1.9.0, 1.8.9, and 1.8.6 (the current default in the chart); the result is the same.
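
A possible workaround, assuming the --target flag at these versions accepts only a single kind/name pair (which the "target format error" above suggests), is to install one release per target. A minimal sketch; the release names are illustrative, and options.target is the same chart value used above:

helm upgrade --install coredns-autoscaling \
  cluster-proportional-autoscaler/cluster-proportional-autoscaler \
  --namespace kube-system \
  --set options.target="deployment/coredns" \
  -f values.yaml   # shared ladder/resources values (illustrative)

helm upgrade --install metrics-server-autoscaling \
  cluster-proportional-autoscaler/cluster-proportional-autoscaler \
  --namespace kube-system \
  --set options.target="deployment/metrics-server" \
  -f values.yaml   # shared ladder/resources values (illustrative)

Each release then runs its own autoscaler pod watching a single target, at the cost of an extra Deployment and ConfigMap.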

Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)
