
Horizontal Pod Autoscaler operator

You may not want to, or may not be able to, edit a Helm chart just to add an autoscaling feature. Nearly all charts support custom annotations, so we believe it is a good idea to be able to set up autoscaling just by adding a few simple annotations to your deployment.

We have open sourced a Horizontal Pod Autoscaler operator. This operator watches your Deployment or StatefulSet and automatically creates a HorizontalPodAutoscaler resource, should you provide the correct autoscale annotations.

Autoscale by annotations

Autoscale annotations can be placed:

  • directly on Deployment / StatefulSet:
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: example
   labels:
   annotations:
     hpa.autoscaling.banzaicloud.io/minReplicas: "1"
     hpa.autoscaling.banzaicloud.io/maxReplicas: "3"
     cpu.hpa.autoscaling.banzaicloud.io/targetAverageUtilization: "70"
  • or on spec.template.metadata.annotations:
 apiVersion: extensions/v1beta1
 kind: Deployment
 ...
 spec:
   replicas: 3
   template:
     metadata:
       labels:
         ...
       annotations:
           hpa.autoscaling.banzaicloud.io/minReplicas: "1"
           hpa.autoscaling.banzaicloud.io/maxReplicas: "3"
           cpu.hpa.autoscaling.banzaicloud.io/targetAverageUtilization: "70"

The Horizontal Pod Autoscaler operator takes care of creating, updating, and deleting the HPA resource; in other words, it keeps it in sync with your deployment annotations.
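
For illustration, the annotations in the first example above would yield an HPA roughly equivalent to the following minimal sketch (written against the autoscaling/v2beta1 API for this example; the exact resource the operator creates may differ in detail):

 apiVersion: autoscaling/v2beta1
 kind: HorizontalPodAutoscaler
 metadata:
   name: example
 spec:
   scaleTargetRef:
     apiVersion: extensions/v1beta1
     kind: Deployment
     name: example
   minReplicas: 1
   maxReplicas: 3
   metrics:
   - type: Resource
     resource:
       name: cpu
       targetAverageUtilization: 70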

Annotations explained

All annotations must contain the autoscaling.banzaicloud.io prefix. You are required to specify minReplicas, maxReplicas, and at least one metric to be used for autoscaling. You can add Resource-type metrics for cpu and memory, as well as Pods-type metrics (a combined sketch follows the list below). Let's see what kind of annotations can be used to specify metrics:

  • cpu.hpa.autoscaling.banzaicloud.io/targetAverageUtilization: "{targetAverageUtilizationPercentage}" - adds a Resource-type metric for cpu with the target average utilization set as specified, where targetAverageUtilizationPercentage should be an integer between 1 and 100.

  • cpu.hpa.autoscaling.banzaicloud.io/targetAverageValue: "{targetAverageValue}" - adds a Resource-type metric for cpu with targetAverageValue set as specified, where targetAverageValue is a Quantity.

  • memory.hpa.autoscaling.banzaicloud.io/targetAverageUtilization: "{targetAverageUtilizationPercentage}" - adds a Resource-type metric for memory with the target average utilization set as specified, where targetAverageUtilizationPercentage should be an integer between 1 and 100.

  • memory.hpa.autoscaling.banzaicloud.io/targetAverageValue: "{targetAverageValue}" - adds a Resource-type metric for memory with targetAverageValue set as specified, where targetAverageValue is a Quantity.

  • pod.hpa.autoscaling.banzaicloud.io/customMetricName: "{targetAverageValue}" - adds a Pods-type metric named customMetricName with targetAverageValue set as specified, where targetAverageValue is a Quantity.
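
As a combined sketch of the non-cpu annotations (the Pods metric name records_consumed_per_second and the target values are hypothetical placeholders), a Deployment could declare a memory Resource metric alongside a Pods metric like this:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: example
   annotations:
     hpa.autoscaling.banzaicloud.io/minReplicas: "1"
     hpa.autoscaling.banzaicloud.io/maxReplicas: "5"
     memory.hpa.autoscaling.banzaicloud.io/targetAverageValue: "512Mi"
     pod.hpa.autoscaling.banzaicloud.io/records_consumed_per_second: "100"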

To use custom metrics from Prometheus, you have to deploy the Prometheus Adapter and the Metrics Server, as explained in detail in our previous post about using HPA with custom metrics.

Custom metrics from version 0.1.5

From version 0.1.5 we have removed support for Pods-type custom metrics and added support for Prometheus-backed custom metrics exposed by the Kube Metrics Adapter. To set up an HPA based on Prometheus, one has to add the following deployment annotations:

 prometheus.customMetricName.hpa.autoscaling.banzaicloud.io/query: "sum({kubernetes_pod_name=~"^YOUR_POD_NAME.*",__name__=~"YOUR_PROMETHEUS_METRICNAME"})"
 prometheus.customMetricName.hpa.autoscaling.banzaicloud.io/targetValue: "{targetValue}"
 prometheus.customMetricName.hpa.autoscaling.banzaicloud.io/targetAverageValue: "{targetAverageValue}"

The query should be a syntactically correct Prometheus query. Take care to select only metrics related to your Deployment / Pod / Service. You should specify either targetValue or targetAverageValue; with targetAverageValue the metric value is averaged across the current replica count.
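
Put together, a Deployment's pod template could carry a Prometheus-backed metric like the following sketch (the metric name http-requests, the query, and the pod-name pattern are hypothetical placeholders):

 apiVersion: extensions/v1beta1
 kind: Deployment
 ...
 spec:
   template:
     metadata:
       annotations:
         hpa.autoscaling.banzaicloud.io/minReplicas: "2"
         hpa.autoscaling.banzaicloud.io/maxReplicas: "10"
         prometheus.http-requests.hpa.autoscaling.banzaicloud.io/query: "sum(rate(http_requests_total{kubernetes_pod_name=~\"^example.*\"}[3m]))"
         prometheus.http-requests.hpa.autoscaling.banzaicloud.io/targetAverageValue: "100"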

Quick usage example

Let's pick Kafka as an example chart, from our curated list of Banzai Cloud Helm charts. The Kafka chart doesn't contain any HPA resources by default, but it allows specifying Pod annotations as parameters, so it's a good example to start with. Let's see how you can add a simple CPU-based autoscaling rule for the Kafka brokers by adding a few simple annotations:

  1. Deploy the operator:
 helm install banzaicloud-stable/hpa-operator
  2. Deploy the Kafka chart with the autoscale annotations:
 cat > values.yaml <<EOF
 statefullset:
   annotations:
     hpa.autoscaling.banzaicloud.io/minReplicas: "3"
     hpa.autoscaling.banzaicloud.io/maxReplicas: "8"
     cpu.hpa.autoscaling.banzaicloud.io/targetAverageUtilization: "60"
 EOF

 helm install -f values.yaml banzaicloud-stable/kafka
  3. Check that the HPA has been created:
 kubectl get hpa

 NAME      REFERENCE           TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
 kafka     StatefulSet/kafka   3% / 60%          3         8         1          1m
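
For a closer look at what the operator generated, you can inspect the HPA directly (the resource name kafka matches the output above, but it may differ depending on your Helm release name):

 # Show the configured metrics, current utilization and recent scaling events
 kubectl describe hpa kafka

 # Dump the full HorizontalPodAutoscaler resource created by the operator
 kubectl get hpa kafka -o yaml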

Happy Autoscaling!

