
k8s-ttl-controller's Introduction

k8s-ttl-controller


This application allows you to specify a TTL (time to live) on your Kubernetes resources. Once the TTL is reached, the resource is automatically deleted.

To configure the TTL, all you have to do is annotate the relevant resource(s) with k8s-ttl-controller.twin.sh/ttl and a duration value such as 30m, 24h, or 7d.

The resource is deleted after the current timestamp surpasses the sum of the resource's metadata.creationTimestamp and the duration specified by the k8s-ttl-controller.twin.sh/ttl annotation.
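
For example, you can read a resource's creationTimestamp directly and add the annotated duration to it to work out the expiration time (hello-world is a hypothetical pod name used only for illustration):

kubectl get pod hello-world -o jsonpath='{.metadata.creationTimestamp}'

If this prints 2022-07-10T13:00:00Z and the TTL annotation is 1h, the pod becomes eligible for deletion at 2022-07-10T14:00:00Z.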

Usage

Setting a TTL on a resource

To set a TTL on a resource, add the k8s-ttl-controller.twin.sh/ttl annotation to the resource you want to eventually expire, with a duration (measured from the resource's creation) as the value.

In other words, if you had a pod named hello-world that was created 20 minutes ago, and you annotated it with:

kubectl annotate pod hello-world k8s-ttl-controller.twin.sh/ttl=1h

The pod hello-world would be deleted in approximately 40 minutes, because 20 minutes have already elapsed, leaving 40 minutes until the target TTL of 1h is reached.

Alternatively, you can create resources with the annotation already present:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    k8s-ttl-controller.twin.sh/ttl: "1h"
spec:
  containers:
    - name: web
      image: nginx

The above would cause the pod to be deleted 1 hour after its creation.

This is especially useful for creating temporary resources without having to worry about them accumulating over time.

Deploying on Kubernetes

Using Helm

For the chart associated with this project, see TwiN/helm-charts:

helm repo add twin https://twin.github.io/helm-charts
helm repo update
helm install k8s-ttl-controller twin/k8s-ttl-controller -n kube-system
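
To upgrade to a newer chart version later, or to remove the controller entirely, the standard Helm commands apply:

helm repo update
helm upgrade k8s-ttl-controller twin/k8s-ttl-controller -n kube-system
helm uninstall k8s-ttl-controller -n kube-system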

Using a YAML file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-ttl-controller
  namespace: kube-system
  labels:
    app: k8s-ttl-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-ttl-controller
  labels:
    app: k8s-ttl-controller
rules:
  - apiGroups:
      - "*"
    resources:
      - "*"
    verbs:
      - "get"
      - "list"
      - "delete"
  - apiGroups:
      - ""
      - "events.k8s.io"
    resources:
      - "events"
    verbs:
      - "create"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-ttl-controller
  labels:
    app: k8s-ttl-controller
roleRef:
  kind: ClusterRole
  name: k8s-ttl-controller
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: k8s-ttl-controller
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-ttl-controller
  namespace: kube-system
  labels:
    app: k8s-ttl-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-ttl-controller
  template:
    metadata:
      labels:
        app: k8s-ttl-controller
    spec:
      automountServiceAccountToken: true
      serviceAccountName: k8s-ttl-controller
      restartPolicy: Always
      dnsPolicy: Default
      containers:
        - name: k8s-ttl-controller
          image: ghcr.io/twin/k8s-ttl-controller
          imagePullPolicy: Always
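
Assuming you saved the manifest above to a file named k8s-ttl-controller.yaml (the file name is arbitrary), you can apply it with:

kubectl apply -f k8s-ttl-controller.yaml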

Docker

docker pull ghcr.io/twin/k8s-ttl-controller
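
If you want to experiment with the image outside of a cluster, one possible sketch is to mount a kubeconfig into the container. This assumes the controller picks up the KUBECONFIG environment variable through the standard client-go loading rules, which is not confirmed by this README:

docker run --rm -v "$HOME/.kube/config:/kubeconfig:ro" -e KUBECONFIG=/kubeconfig ghcr.io/twin/k8s-ttl-controller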

Development

First, configure your kubeconfig to point to an existing cluster that is accessible from your machine, so that kubectl can be used.

If you don't have one or wish to use a different cluster, you can create a kind cluster using the following command:

make kind-create-cluster

Next, you must start k8s-ttl-controller locally:

make run

To test the application, you can create any resource and annotate it with the k8s-ttl-controller.twin.sh/ttl annotation:

kubectl run nginx --image=nginx
kubectl annotate pod nginx k8s-ttl-controller.twin.sh/ttl=1h

You should then see something like this in the logs:

2022/07/10 13:31:40 [pods/nginx] is configured with a TTL of 1h, which means it will expire in 57m10s

If you want to ensure that expired resources are properly deleted, you can simply set a very low TTL, such as:

kubectl annotate pod nginx k8s-ttl-controller.twin.sh/ttl=1s

You would then see something like this in the logs:

2022/07/10 13:36:53 [pods/nginx2] is configured with a TTL of 1s, which means it has expired 2m3s ago
2022/07/10 13:36:53 [pods/nginx2] deleted
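
Once the controller reports the deletion, you can confirm that the pod is gone:

kubectl get pod nginx

This should return a "not found" error after the resource has been deleted.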

To clean up the kind cluster:

make kind-clean

Debugging

To enable debug logs, set the DEBUG environment variable to true.
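
For example, if you deployed using the YAML manifest above, you can enable debug logs by adding the environment variable to the container spec (a minimal sketch; only the DEBUG variable itself is documented here):

      containers:
        - name: k8s-ttl-controller
          image: ghcr.io/twin/k8s-ttl-controller
          imagePullPolicy: Always
          env:
            - name: DEBUG
              value: "true"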

k8s-ttl-controller's People

Contributors

dependabot[bot], TwiN


k8s-ttl-controller's Issues

Not compatible with k8s 1.25+

Describe the bug

It looks like the controller is using k8s APIs that were removed in k8s 1.25.

I was wondering if you're planning to release an updated version that supports more recent k8s APIs.

What do you see?

Errors about accessing k8s APIs removed in version 1.25

What do you expect to see?

I'd like the controller to be compatible with k8s API 1.25+

List the steps that must be taken to reproduce this issue

Deploy the controller to k8s 1.25 and up

Version

latest

Additional information

Thanks for providing this useful tool!

k8s 1.29 compatibility

Describe the bug

AWS EKS is reporting an API compatibility issue with k8s-ttl-controller on k8s 1.29. In particular, it is apparently using the /apis/flowcontrol.apiserver.k8s.io/v1beta2/flowschemas and /apis/flowcontrol.apiserver.k8s.io/v1beta2/prioritylevelconfigurations APIs, both of which are removed in 1.29 and replaced with their beta3 equivalents.

What do you see?

No response

What do you expect to see?

No response

List the steps that must be taken to reproduce this issue

  1. Install and use the controller on an AWS EKS (v1.28) cluster with the Helm chart (overriding the version to 1.3.0).
  2. Check the "Update Insights" tab for the EKS cluster.

Version

1.3.0

Additional information

No response

Automatically drain nodes if their TTL expires

Describe the feature request

If a resource of type node has a TTL that has expired, attempt to drain the node before deleting it.

Note that for the first implementation, the drain process can be non-graceful (i.e. it can use the equivalent of kubectl drain node --disable-eviction, which bypasses PDBs).
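
For reference, the non-graceful drain mentioned above corresponds roughly to the following command (the node name is a placeholder):

kubectl drain my-node --disable-eviction --ignore-daemonsets --delete-emptydir-data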

Why do you personally want this feature to be implemented?

To handle node deletion slightly more gracefully.
Hopefully, it will work alongside cluster-autoscaler to provide a slightly better experience than just deleting the node.

How long have you been using this project?

No response

Additional information

No response

Scale deployment to 0 instead of delete

Describe the feature request

It would be nice to have the option to scale applicable resources (e.g. a Deployment) down to zero rather than delete them.
This way you still save the resources of not having replicas running, but keep the actual Deployment object in the cluster, so when a team returns to the application it is simply a case of scaling the deployment back up.

Why do you personally want this feature to be implemented?

  • Make it easier to scale temporary workloads down to save computing power on my local cluster.
  • It would also be helpful in a GitOps pattern: it is easy to ignore changes to a replica count, but ignoring a whole resource is more difficult and defeats the point of GitOps a bit.
  • Enable a cluster operator to recover resources from tenants easily without completely deleting resources.

How long have you been using this project?

couple of weeks just exploring it

Additional information

Perhaps could be implemented as a second annotation:

k8s-ttl-controller.twin.sh/type="scale" # default "delete" if not specified, reject other values

Maybe this crosses over into HPA territory, but I was just looking for a simple solution like the one this operator provides: one that doesn't require prior setup or additional YAML, only an annotation or two :)
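
For illustration only, the combination proposed above could look like this on a Deployment (neither the type annotation nor the scaling behavior exists today; this is just a sketch of the proposal):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    k8s-ttl-controller.twin.sh/ttl: "24h"
    k8s-ttl-controller.twin.sh/type: "scale"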

Memory usage buildup

Describe the bug

After running the deployment for some time, I noticed that it uses more and more memory the longer it lives. This should not happen.

What do you see?

[screenshot of memory usage]

What do you expect to see?

Memory usage stabilizes at a certain point.

List the steps that must be taken to reproduce this issue

  1. Start application
  2. Monitor the application over time

Version

Helm chart version 0.2.0

Additional information

No response

TTL notification

Describe the feature request

I was wondering if you're planning to add email notifications that are sent when a TTL is about to expire.

Why do you personally want this feature to be implemented?

No response

How long have you been using this project?

No response

Additional information

No response

Configurable frequency for the annotation check, and reduced logging that shows only errors and deleted objects

Describe the feature request

Right now the code checks every five minutes and scans all objects for the annotation. It would be good to have a configurable interval for this.
2023/07/27 07:18:10 Execution took 9840ms, sleeping for 5m0s
2023/07/27 07:23:19 Execution took 9752ms, sleeping for 5m0s
2023/07/27 07:28:29 Execution took 9734ms, sleeping for 5m0s
2023/07/27 07:33:30 [persistentvolumeclaims/nifi-state-nifi-0] is configured with a TTL of 5s, which means it has expired 42h54m8s ago
2023/07/27 07:33:30 [persistentvolumeclaims/nifi-state-nifi-0] deleted
2023/07/27 07:33:32 [statefulsets/nifi] is configured with a TTL of 5s, which means it has expired 18m39s ago
2023/07/27 07:33:32 [statefulsets/nifi] deleted
2023/07/27 07:33:39 Execution took 10125ms, sleeping for 5m0s

Why do you personally want this feature to be implemented?

It would be nice if this were configurable or made more intelligent using a watch.

How long have you been using this project?

1 month

Additional information

Ability to watch for annotations, or a configurable check frequency.
Reduce logging to show only errors and deleted objects.
